🔗 Video - YouTube - Computerphile
📄 AI Paper - No Zero-Shot without Exponential Data…
🏁 Conclusions about AI
-
The above is a great video for people short on time who want to understand the challenges of building a single superhuman AI.
-
The paper analyses 34 multimodal vision-language models trained on 5 pretraining datasets, generating over 300GB of data artifacts in the process.
-
The findings reveal that, across concepts, significant improvements in zero-shot performance require exponentially more data, following a log-linear scaling trend.
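-
As a minimal sketch of what that log-linear trend implies (all numbers below are illustrative, not measurements from the paper): fit accuracy against log-frequency, then invert the fit to see how fast the data requirement grows.

```python
import numpy as np

# Illustrative numbers only -- not the paper's measurements.
# Concept frequency in pretraining data vs. zero-shot accuracy on that concept.
freq = np.array([1e2, 1e3, 1e4, 1e5, 1e6])       # examples of the concept seen
acc = np.array([12.0, 19.0, 27.0, 34.0, 41.0])   # zero-shot accuracy (%)

# A log-linear trend means accuracy grows linearly in log10(frequency):
#   acc ~ a * log10(freq) + b
a, b = np.polyfit(np.log10(freq), acc, deg=1)
print(f"slope: ~{a:.1f} accuracy points per 10x more data")

# Inverting the relation: the data needed to hit a target accuracy
# grows exponentially with that target.
for target in (50, 60, 70):
    needed = 10 ** ((target - b) / a)
    print(f"{target}% accuracy -> ~{needed:.1e} examples of the concept")
```

Linear gains in accuracy demand multiplicative gains in data, which is the crux of the paper's result.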
-
This pattern persists even when controlling for similarities between the pretraining and downstream datasets, and even when testing models on entirely synthetic data distributions.
-
This calls for a critical reassessment of what “zero-shot” generalisation actually entails for multimodal models, and highlights the limits of their current generalisation capabilities.
-
In other words, as the video notes, the idea that we can train a generalised AI on many, many images of cats & dogs and expect it to accurately generate images of rarely-seen elephants is likely to fall short.
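-
To make the cats-&-elephants point concrete, here is a toy sketch of counting concept frequency in pretraining captions (the paper's actual pipeline is far more careful, combining text matching with image tagging; this is just the intuition):

```python
from collections import Counter

# Toy stand-in for a web-scale caption corpus: common concepts ("cat", "dog")
# vastly outnumber rarer ones ("elephant") in real pretraining data.
captions = [
    "a cat sleeping on a sofa",
    "my dog playing fetch",
    "cute cat in a box",
    "a dog and a cat together",
    "an elephant at the watering hole",
]

concepts = ["cat", "dog", "elephant"]
counts = Counter()
for caption in captions:
    tokens = caption.lower().split()
    for concept in concepts:
        if concept in tokens:
            counts[concept] += 1

# The model sees the head of this distribution constantly and the tail rarely,
# so "zero-shot" performance on tail concepts tracks their (low) frequency.
print(counts)  # Counter({'cat': 3, 'dog': 2, 'elephant': 1})
```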
-
There seems to be a law of diminishing returns on the value obtained from training a generalised AI to be an oracle of all knowledge (or, in this example, an oracle of all images). On the current trajectory, it is likely to remain too cost-prohibitive for now.
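-
To put rough numbers on that cost wall (the slope here is hypothetical, chosen only to match the sketch earlier in this post):

```python
# Assume, illustratively, ~7 accuracy points gained per 10x more concept data.
slope = 7.0  # hypothetical; real slopes vary by model, dataset and concept

for gain in (7, 14, 21, 28):
    multiplier = 10 ** (gain / slope)
    print(f"+{gain} accuracy points -> ~{multiplier:,.0f}x more data")
# +7 -> ~10x, +14 -> ~100x, +21 -> ~1,000x, +28 -> ~10,000x:
# each constant step in quality costs another order of magnitude of data
# (and, roughly, compute and money) -- hence "cost prohibitive for now".
```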