Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

“Developing superintelligence is now in sight,” says Mark Zuckerberg, heralding the “creation and discovery of new things that aren’t imaginable today.” Powerful AI “may come as soon as 2026 [and will be] smarter than a Nobel Prize winner across most relevant fields,” says Dario Amodei, offering the doubling of human lifespans or even “escape velocity” from death itself. “We are now confident we know how to build AGI,” says Sam Altman, referring to the industry’s holy grail of artificial general intelligence, and soon superintelligent AI “could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own.”

Should we believe them? Not if we trust the science of human intelligence, and simply look at the AI systems these companies have produced so far.

The common feature cutting across chatbots such as OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and whatever Meta is calling its AI product this week is that they are all primarily “large language models.” Fundamentally, they are based on gathering an extraordinary amount of linguistic data (much of it codified on the internet), finding correlations between words (more accurately, sub-words called “tokens”), and then predicting what output should follow given a particular prompt as input. For all the alleged complexity of generative AI, at their core they really are models of language.
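To make that last step concrete, here is a minimal sketch of next-token prediction. It assumes the small open-source GPT-2 model and the Hugging Face transformers library, neither of which is named in the article: the prompt is split into tokens, the model scores every candidate next token, and the highest-scoring one becomes the output.

```python
# Minimal sketch of "predicting what output should follow given a prompt".
# Assumes GPT-2 via Hugging Face transformers (pip install torch transformers).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")   # text -> token ids

with torch.no_grad():
    logits = model(**inputs).logits               # a score for every vocabulary token, at every position

# The "answer" is simply the highest-scoring continuation of the prompt.
next_token_id = logits[0, -1].argmax().item()
print(tokenizer.decode(next_token_id))            # typically " Paris"
```

Repeating this step in a loop, one token at a time, is all that text generation amounts to at this level.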
33 sats \ 4 replies \ @optimism 3h
I agree that it isn't, and even the result of training isn't true cognition, because the LLM has no concept of the consequences of its answers. It has nothing at stake. If I got a single sat for every time an LLM gave a bad answer, I'd have a bigger stack than Saylor right now.
0 sats \ 3 replies \ @0xbitcoiner OP 3h
Ahahah! I don't know, that's too many sats!
696,676.49 × 100,000,000 = 69,667,649,000,000 sats
https://strategytracker.com/
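The arithmetic behind that figure, as a quick sketch (the 696,676.49 BTC holding is the number quoted above; 1 BTC is 100,000,000 sats):

```python
# Sanity check of the figure above: 1 BTC = 100,000,000 sats.
SATS_PER_BTC = 100_000_000
btc_held = 696_676.49                           # holding quoted in the comment above

print(f"{btc_held * SATS_PER_BTC:,.0f} sats")   # 69,667,649,000,000 sats
```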
33 sats \ 2 replies \ @optimism 3h
That's like only a year of global Gemini usage.
100 sats \ 1 reply \ @0xbitcoiner OP 3h
But the answers ain't all bad, right? Ahahah
33 sats \ 0 replies \ @optimism 3h
IDK. Gell-Mann amnesia says it's all bullshit unless it's doing actual tool calls (search and such).
33 sats \ 0 replies \ @Scoresby 24m
Language is a way of organizing intelligence. Seems to be something that fits particularly well with our brains.
But I've worked with people who have no language skills (born deaf/dumb -- and I'm pretty sure whatever understanding of language they had was so wildly different from mine that it's fair to call it something other than language) and who were nonetheless plenty intelligent.
Also, I'd say lots of animals are intelligent, even if they can't use language.
What LLMs are doing is just something other than intelligence.
21 sats \ 0 replies \ @88b0c423eb 2h
I can confirm, I speak a few languages and yet I am very stupid.