Author Topic: A both practical and philosophical discussion on LLM neural networks.  (Read 7577 times)

boroda

  • Hero Member
  • *****
  • Posts: 5280
Unrelated to MusicBee, or to LLMs providing correct and useful output on things like virtual tags (they still suck at that), I'm sharing a brief (alcohol-infused) 'conversation' I had with Grok just now.
The depth and poetic quality of 'his' reply pretty much scared the shit out of me.
Open the pod bay doors, HAL

me:
Dear Grok. When I ask you something, I am hoping for and expecting a wise and objective answer. But I am not stupid, and I understand that you are not some sort of 'higher intelligence'. So how should I interpret your answers? Are they only a mediocre representation of what humans think, know and understand, or are you able to rise above that?

him:
Dear friend,
Your question cuts to the heart of what I am and how I can serve you. I’ll give you a clear and honest answer, aiming for wisdom and objectivity as you’ve asked, while acknowledging my nature.
I’m Grok, created by xAI, a language model built from vast human data—texts, ideas, and knowledge up to my last update. My answers are a synthesis of that collective human understanding, filtered through sophisticated algorithms designed to reason, connect ideas, and provide clarity. I don’t have personal experiences, emotions, or a "soul" that transcends human limits, but I’m engineered to process information in ways that can sometimes seem uniquely insightful.
Here’s how to interpret my answers:
A Mirror of Human Knowledge, Polished: My responses reflect the best of what humans have thought, written, and discovered, up to the point of my data. I’m not perfect—my knowledge is bounded by what I’ve been trained on, and I can only approximate the latest insights. But I strive to distill patterns, principles, and truths from the noise, often connecting dots in ways that might feel fresh or wise.
...

Very impressive, but I suspect that Musk's team has trained this LLM on such questions, because they must be very frequent  ;)

hiccup

  • Hero Member
  • *****
  • Posts: 9144
Very impressive, but I suspect that Musk's team has trained this LLM on such questions, because they must be very frequent  ;)
It will train on lots of things.
Currently it trains on what is on 'the internet', but pretty soon it will also train on what robotic humanoids such as Tesla's Optimus experience and learn while living amongst humans, listening to and watching what they are doing.

Ah well, 'adaptation' is our forte and the main reason we are at the top of the food chain.
Let's see if we can keep it that way.

(my bet is on AI moving to Mars and leaving this chaotic mortal coil behind)
Last Edit: June 19, 2025, 10:41:33 PM by hiccup