(I've decided to stay and use SN as a rant space for my ideas, please bear with my indecision)
Current AI will not lead to AGI: LLMs are next-token predictors. And no, I'm not an internet schizo doomer; I'm basing my claims on facts.
1. We have never been able to embed common sense into artificial systems
This is one of the reasons symbolic AI lost its momentum after the initial optimism of the 60s and 70s; the field as a whole only regained traction once connectionist (neural network) approaches were revived by the invention of backpropagation.
2. LLMs fail at trivial tasks such as spatial reasoning
They absolutely suck at spatial tasks such as imagining object rotation in 3D space. They also struggle at vision tasks because they don't understand distances or physics (images are converted into tokens, which the transformer architecture then... predicts).
ARC-AGI-2 is a good benchmark for seeing this failure: most models fail to cross 10 percent, whereas every task in the benchmark was solved by ordinary human testers.
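To make point 2 concrete, here's a minimal Python sketch (the grid values are made up, purely illustrative) of the kind of trivially mechanical spatial operation ARC-style puzzles are built from:

```python
# ARC-style tasks boil down to simple grid transformations like this one:
# rotating a grid 90 degrees clockwise. Three lines of Python; yet pose the
# same transformation to an LLM as text and it routinely fumbles it.
grid = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]

# Reverse the rows, then transpose: a standard clockwise-rotation idiom.
rotated = [list(row) for row in zip(*grid[::-1])]
print(rotated)  # [[7, 4, 1], [8, 5, 2], [9, 6, 3]]
```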
3. The solution is not more data and scaling
People like Scam Altman (scam because of worldc*in, the most dystopian shit I've read in a decade) have repeatedly parroted the claim that more data and more compute will solve LLMs being inherently dumb. But the way tokenization works (i.e. how text is represented) is itself flawed, leading to issues such as LLMs not knowing how many r's are in "strawberry".
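You can see the tokenization problem directly with OpenAI's open-source tiktoken library. A quick sketch below; the encoding name matches GPT-4-era models, but treat the exact token split as illustrative since it varies by tokenizer:

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era BPE encoding
tokens = enc.encode("strawberry")

# The model never sees individual letters, only these opaque chunk IDs,
# which is why counting the r's is surprisingly hard for it.
for t in tokens:
    print(t, enc.decode_single_token_bytes(t))
```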
4. LLMs learn patterns in their data; it's a weird case of overfitting
Ask an LLM to generate a random number between 1 and 6 (without code or internet access) and it will answer 4 most of the time. This, like the overuse of em dashes, is a pattern learnt from its training data, where human-written "random" picks skew heavily toward certain numbers. You can check this yourself; see the sketch below.
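A rough sketch of the experiment using the official openai Python client. The model name and sample size are arbitrary choices of mine, and you'll need an API key in OPENAI_API_KEY:

```python
# pip install openai
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
tally = Counter()

for _ in range(50):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # arbitrary pick; try whichever model you like
        messages=[{
            "role": "user",
            "content": "Pick a random number between 1 and 6. Reply with only the number.",
        }],
    )
    tally[resp.choices[0].message.content.strip()] += 1

# A uniform sampler would give each number roughly 8 hits out of 50;
# in practice the counts tend to cluster hard on one or two values.
print(tally.most_common())
```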
5. LLMs simply do not understand math
As stated in the previous point, they have memorized that 2+2=4; they haven't learnt that adding two integers yields another integer. There is no computation of the operation, just statistics and probability at play (softmax and its consequences).
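Here's a toy illustration of what "softmax and its consequences" means: the answer to "2+2=" is an argmax over a learned probability table, not an addition. The logits below are numbers I made up for illustration, not pulled from any real model:

```python
import math

# Made-up logits for a few candidate next tokens after the prompt "2+2=".
logits = {"4": 9.0, "5": 2.5, "3": 2.0, "22": 0.5}

# Softmax turns logits into a probability distribution:
# p_i = exp(z_i) / sum_j exp(z_j)
total = sum(math.exp(z) for z in logits.values())
probs = {tok: math.exp(z) / total for tok, z in logits.items()}

for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok!r}: {p:.4f}")

# Greedy decoding emits the most probable token. No addition was performed;
# "4" wins purely because of statistics learned from training data.
print("answer:", max(probs, key=probs.get))
```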
Read Anthropic's paper ("On the Biology of a Large Language Model") for more info on how LLMs are REALLY DUMB.
These are just a few of the points I could share here; maybe someday I'll write a more detailed and technical blogpost about this. Now, coming to the real issue: LLMs and AI API wrappers have redirected precious research funds towards these statistical parrots. And not only that, a recent study (MIT's Media Lab, I think) showed that people who over-relied on ChatGPT exhibited signs of cognitive "decay".
We need more physics-based and biologically grounded models: bio-inspired and neurosymbolic systems, as Yann LeCun and Gary Marcus have argued. Instead, all we have is a glorified Markov chain (see the sketch below). fml.
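And since I called them glorified Markov chains, here's the un-glorified version in a dozen lines (toy corpus of my own invention). An LLM conditions on a far longer context with vastly more parameters, but the sampling loop is conceptually the same: next token given previous tokens.

```python
import random
from collections import defaultdict

# Train a toy bigram (order-1) Markov chain: next word given current word.
corpus = "the cat sat on the mat and the dog sat on the rug".split()
chain = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    chain[prev].append(nxt)

# Generate: repeatedly sample a successor, conceptually like temperature
# sampling in an LLM, just with a one-word context window.
word = "the"
out = [word]
for _ in range(10):
    successors = chain.get(word)
    if not successors:  # dead end: the last corpus word has no successor
        break
    word = random.choice(successors)
    out.append(word)

print(" ".join(out))
```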
Such machines will never cure cancer or come up with a solution for poverty and world hunger. It's a VC money-burning shitshow. I'd like to know your thoughts on this.