17 sats \ 3 replies \ @0xIlmari 2 May \ on: The Intelligence Curse AI
Lmao, we're not even close.
reply
There are multipliers. The chips are improving at a rate of about 3-4x per year. One can see this in the progress of Nvidia's GPUs from Hopper to Blackwell and soon Rubin and Feynman. The progress is not just in logic but also in memory bandwidth (HBM) and rack architecture (e.g., NVL72). A 100x improvement in chip performance over the next 4 years is quite possible. Expect the unexpected.
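A quick back-of-the-envelope check on that compounding claim (a minimal sketch; the 3-4x-per-year figure is the estimate above, not a measured benchmark):

```python
# Rough compounding check: if per-chip performance grows 3-4x per year
# (the estimate above), what does 4 years of compounding give you?
for yearly_gain in (3.0, 4.0):
    total = yearly_gain ** 4  # 4 years of compounding
    print(f"{yearly_gain:.0f}x/year for 4 years -> ~{total:.0f}x total")

# Output:
# 3x/year for 4 years -> ~81x total
# 4x/year for 4 years -> ~256x total
```

On those assumptions, a 100x gain in 4 years sits inside the 81x-256x range, which is the claim being made.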
reply
Throwing more hardware at this problem will only make the AIs faster and cheaper, not better.
The models are still dumb and don't show a shred of intelligence.
They still can't reliably count the "r"s in "strawberry".
You can't play chess or other games with them.
They continue to hallucinate every time the task pertains to a hole in their knowledge. Totally incapable of saying "I don't know".
They are hopeless at creative story writing because they have the attention span of a goldfish and can produce discontinuities between adjacent paragraphs, despite massive context windows.
And don't even get me started on coding. They can barely comprehend what's going on in a single file in an actual project, much less a whole codebase. Again, despite growing context windows. They can't even do a proper code review and spot glaring errors.
The current model architecture has visibly plateaued. All progress is approaching an asymptote.
Intelligence is not as trivial as text completion. Anyone thinking otherwise is delusional, has been fooled (that's you, repeating Nvidia's talking points), or is outright lying (Sam Altman) to waste more investor money.
We need a completely new model architecture, then another decade of iterating on it, and then we can re-evaluate whether we're anywhere close to AGI.
That is, if AGI is even possible to achieve on digital computers.
reply