🏁 Conclusions about AI

  • The above is a great video for people short on time who want to understand the challenges of building a single superhuman AI.
  • The paper analysed 34 multimodal vision-language models across 5 pretraining datasets, generating over 300 GB of data artifacts.
  • Findings reveal that, across concepts, significant improvements in zero-shot performance require exponentially more data, following a log-linear scaling trend (see the sketch after this list).
  • This pattern persists despite controlling for similarities between pretraining and downstream datasets or even when testing models on entirely synthetic data distributions.
  • This calls for a critical reassessment of what “zero-shot” generalization entails for multimodal models, highlighting the limits of their current generalization capabilities.
  • In other words, as noted in the video, the idea that we can train a generalised AI on many, many cats and dogs and expect it to generate accurate images of rare elephants is likely to hit a wall.
  • There seems to be a law of diminishing returns in training a generalised AI to be an oracle of all knowledge (or, in this example, an oracle of all images). On the current trajectory it is likely to be cost-prohibitive for now.
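To make the "log-linear" point concrete, here is a minimal sketch with made-up numbers (not figures from the paper) of what such a trend means in practice: each fixed gain in zero-shot accuracy needs roughly an order of magnitude more examples of the concept.

```python
import numpy as np

# Illustrative numbers only (not taken from the paper): how often a concept
# appears in the pretraining data vs. zero-shot accuracy on that concept.
freq = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
acc = np.array([0.22, 0.31, 0.40, 0.49, 0.58])

# "Log-linear" means accuracy grows linearly in log(frequency):
#   acc ≈ a * log10(freq) + b
# so every additional +a of accuracy needs ~10x more examples of the concept.
a, b = np.polyfit(np.log10(freq), acc, deg=1)
print(f"accuracy ≈ {a:.2f} * log10(frequency) + {b:.2f}")
```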
I remember watching 3blue1brown's recent videos (or one of the channels he links, not sure) and the consensus still seemed to be that doubling your training data will double your model's performance, hence the race to more data and bigger models rather than algorithmic improvements.
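For comparison, published scaling-law fits (Chinchilla-style) are usually written as power laws in the data, which already implies diminishing rather than proportional returns; the constants below are illustrative only, not taken from the video or any paper.

```python
# Chinchilla-style data term: loss ≈ E + B / D**beta (constants illustrative).
# Doubling the dataset D only shrinks the reducible part of the loss by a
# factor of 2**(-beta), which is far short of "2x the performance".
E, B, beta = 1.7, 410.0, 0.28

def loss(d_tokens: float) -> float:
    return E + B / d_tokens**beta

for d in (1e9, 2e9, 4e9):
    print(f"D = {d:.0e} tokens -> loss ≈ {loss(d):.3f}")
```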
reply
110 sats \ 1 reply \ @k00b 10 May
I love computerphile!
reply
Nerd alert 🤓
reply
Has humanity peaked?
reply
now that’s a post awaiting zaps
reply
No no. It is barely starting, not peaked. "We are so early". That phrase sounds familiar hmmmm heh
Look, for example, at 1.58-bit weight quantisation, which is a totally new way to train. We'll need new chips, but it will be much more efficient. There are also people researching new AI methods that more closely resemble how the human brain works, which is excellent for efficiency.
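For a rough idea of what "1.58 bit" means, here is a minimal sketch of BitNet-b1.58-style "absmean" quantisation of a weight matrix to the ternary set {-1, 0, +1} (log2(3) ≈ 1.58 bits per weight). This is a simplified post-hoc snap for illustration, not the full quantisation-aware training recipe.

```python
import numpy as np

def absmean_ternary_quantize(W, eps=1e-8):
    """Quantise weights to {-1, 0, +1} with a per-tensor scale (simplified sketch)."""
    scale = np.mean(np.abs(W)) + eps                  # "absmean" scaling factor
    W_ternary = np.clip(np.round(W / scale), -1, 1)   # snap to the ternary grid
    return W_ternary, scale                           # effective weight ≈ W_ternary * scale

W = np.random.randn(4, 4)
W_q, s = absmean_ternary_quantize(W)
print(W_q)                            # entries are only -1, 0 or +1
print(np.mean(np.abs(W - W_q * s)))   # rough quantisation error
```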
reply
Hard to imagine it has peaked.
reply
Agree, but the argument is that the idea of having one dataset and model for everything is not infinitely scalable. There are diminishing returns there. Perhaps the title of their video is a bit disingenuous.
reply
I agree with this. AI is here to stay and it will keep getting better (at least to some extent). But I also think it is overhyped, and I have doubts about whether we will ever see artificial general intelligence (AGI). ChatGPT is good at guessing the next word in a sentence, but is it really intelligent?
reply
100 sats \ 2 replies \ @gmd 10 May
Surely we can admit it displays multiple measures of intelligence and reasoning. I like the example Ilya Sutskever discusses with Jensen Huang: if you train an LLM on text, then feed it a mystery novel and at the end ask it "who is the murderer?", and it gets the answer right, is that not demonstrating that the model is not just guessing words but is developing an understanding and internal model of the outside world?
I think we also forget how stupid the bottom 25% of humans are... if ChatGPT is smarter than 90% of people on 90% of things, surely we can grant that it has intelligence even if it is not AGI.
reply
That's a good comparison, but the essential question is: will these models ever be so good that they will be able to do that? If they are at some point, then the question of whether the model is intelligent becomes irrelevant (Edsger Dijkstra remarked that the question of whether machines can think is "about as relevant as the question of whether submarines can swim"). However, I still have some doubts.
reply
I think they can already do this sort of reasoning to a significant extent... at least for simple stuff.
reply
I guess that makes sense
reply
Likely but not now. Computerphile is reaching conclusions very early for almost everything.
reply
They do mention that it is just 1 paper and that it’s a ‘wait and see’.
But it’s encouraging to hear someone without much of an incentive (besides creating clickbaity titles) arriving at a different conclusion to the mainstream narrative of an AI takeover.
Draw your own conclusions of course.
reply
Yes, you're right. We may draw our own conclusions.
There's so much lined up on the subject of AI that I don't see this shit stopping for many years from now.
reply
The more that people and machines use generative AI to publish content, the more crap will feed the same scrapers that train generative AI. I expect diminishing returns over time as this thesis plays out. There will be a return to quality and substance as hallucinations get compounded by GIGO (garbage in, garbage out). Having a library filled with great books will offer much better signal than all the endless noise.
reply
There are still obstacles to overcome before achieving truly advanced AI! AI is progressing, but to fully unlock its potential it's essential to develop and use increasingly efficient and innovative methods. But it's still in its early stages.
reply
Yes. I believe it's over for AI
reply