202 sats \ 7 replies \ @SimpleStacker 23h \ parent \ on: Claude runs a vending machine: decides to stock tungsten cubes, loses $250 AI
There must be something deep to say about this. Imagine a person with a PhD-level education across a full range of topics, and you asked them to do rote work like stocking a vending machine or managing a daily budget. I wonder if that person would also start to hallucinate and have delusions of grandeur.
I don't think you can compare autocorrect that was transcoded and then hammered into shape, without ever asking for that, with an actual PhD-level education.
I've heard it said multiple times that we too are LLMs because the neural structure is modeled after our brains. But are we? Since I've been paying attention to this in my own reasoning lately, I don't think we actually process tokens or words or even language. Often I have to go through real trouble to express ideas in words, and the older I get, the more involved my ideas get and the harder it seems to write them down.
When typing this reply I almost instantly knew what to write, but then I translated it into language. Not sure if that makes sense.
I knew a woman once who was born deaf. My job was to help her get a job. Naively, I assumed we could just communicate by writing on a piece of paper.
This ended up being a very frustrating experience for both of us. Her first language was signing and she had not had very much experience with written words.
It took me a long time to realize that she thought in sign language. I don't sign but I imagine it shapes your thoughts differently than verbal or written English.
Humans don't need language to think, but it strongly shapes how we think when we use it. I don't know if LLMs can think without language...it is the only "thought" they have.
wow, that's fascinating.
Aren't you just describing the decoder part of the encoder-decoder model? The part where you decode embedding states into tokens?
Just playing devil's advocate here. I don't have a strong position on how similarly AI and humans think. But I want to push the limit of the argument.
Nice hypothesis.
I think it's different, because embedding-to-token mapping is a lookup (at least it is in transformers), simply because integers are cheaper to store than strings, whereas the translation from thought to language is more of another inference process than a lookup (though I'm not sure that is a mechanically correct assessment because I'm not a neurologist, so grain of salt plz).
Now the fun thing is that, as I understand it, the last and current generation of chatbots actually do the second step as inference too. But the difference is that it's two iterations over the same linguistic base data (with different weights applied), whereas our brains, to my understanding, have different source data for each step. Extremely simplified (grain of salt again): first fight-or-flight, then creative-or-mundane, then narration?
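The lookup-versus-inference distinction above can be sketched in a few lines of Python. Everything here is a toy stand-in, not any real model: the vocabulary, the hidden state, and the weight rows are made up purely for illustration.

```python
# Toy contrast between the two steps discussed above.
# All names and numbers are made up for illustration.

VOCAB = ["hello", "world", "vending", "machine"]

# Inference-like step: score each vocabulary entry against a hidden
# state via dot products, then pick the highest-scoring index.
HIDDEN = [0.2, 0.9]
WEIGHTS = [  # one weight row per vocabulary entry
    [0.1, 0.0],
    [0.0, 0.1],
    [0.5, 0.5],
    [0.9, 0.8],
]

def next_token_id(hidden, weights):
    """Return the index of the best-scoring vocabulary entry."""
    scores = [sum(h * w for h, w in zip(hidden, row)) for row in weights]
    return scores.index(max(scores))

# Lookup step: integer id -> string is just table indexing,
# no computation beyond reading the entry.
token_id = next_token_id(HIDDEN, WEIGHTS)
print(VOCAB[token_id])  # prints "machine"
```

The point of the sketch: picking the next token id involves arithmetic over weights (an inference-flavored step), while turning that id back into a string is a plain table lookup, which is the cheap integer-to-string mapping the comment describes.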
...or they run the best freaking vending machine anyone has ever seen and it becomes a powerhouse on the vending industry, revolutionizing break rooms in offices the world over and changing the very way we do work...