Watched a talk where Chomsky suggests many AI researchers are empiricists (i.e., they believe most of AI's problems will be solved with more parameters) and are failing to make progress on aspects of intelligence that are innate rather than learned.
I found this pretty compelling and I hope there are researchers listening.
Talk is here:
Sidebar: Chomsky got old (and I guess so did I) since the last time I was paying attention to him!
Now that I've gotten to play with ChatGPT, I agree. It can't count. It can't understand sequences or logic. It's just a lucky parrot with lots of memory.
They still sound like a bunch of Luddites though, imo.
reply
I agree. They sound like haters which is upsetting because I think they also have a point.
reply
Important to remember this is still, for all intents and purposes, a version 1, and a lot of the issues we're complaining about today are on their way to getting fixed (https://twitter.com/DrJimFan/status/1600884299435167745). Who knows what they're seeing with the GPT4 version of ChatGPT (although worth noting that Stable Diffusion has gone backwards with new releases).
Even still, in terms of IQ "intelligence", already today this thing has leapfrogged way past dogs, parrots, most mammals, and an uncomfortable % of the human population. It gets a lot of complex logic wrong, and makes up a lot of stuff, but so do all humans.
Scott Adams talks about skill-stacking: becoming top X% in several skills can put you in an elite class for that Venn diagram of skills. These systems are rapidly getting better than most of us humans at most things.
Also: Holy shit, Chomsky looks like he was pulled out of a LOTR casting.
reply
Even still, in terms of IQ "intelligence", already today this thing has leapfrogged way past dogs, parrots, most mammals, and an uncomfortable % of the human population. It gets a lot of complex logic wrong, and makes up a lot of stuff, but so do all humans.
My favorite factoid: roughly half of American adults cannot read past an 8th-grade level.
In the words of ChatGPT, "While the Turing test has been influential in the field of artificial intelligence, it is insufficient as a measure of intelligence for several reasons.
First, the Turing test only evaluates a machine's ability to simulate human-like conversation. This narrow focus on language ability ignores many other aspects of intelligence, such as problem-solving, creativity, and common sense. A machine that can pass the Turing test may still lack the broader intellectual abilities of a human.
Second, the Turing test relies on the subjective judgment of a human evaluator. This means that the test results can vary depending on the individual evaluator and the specific conversation. This lack of consistency makes it difficult to use the Turing test as a reliable measure of a machine's intelligence.
Third, the Turing test does not take into account the underlying mechanisms of a machine's intelligence. A machine may be able to pass the Turing test by simply memorizing a large number of pre-written responses, without any real understanding of the conversation. This makes it difficult to compare the intelligence of different machines, and to determine how a machine's intelligence may evolve over time.
In conclusion, the Turing test is insufficient as a measure of intelligence because it evaluates only a narrow aspect of human-like behavior, is subject to variation and bias, and does not consider the underlying mechanisms of a machine's intelligence. While the Turing test may have some value as a thought experiment, it is not an adequate test of a machine's intelligence."
reply