The original goal of the AI field was to create machines with general intelligence comparable to that of humans. Early AI pioneers were optimistic: in 1965, Herbert Simon predicted in his book The Shape of Automation for Men and Management that “machines will be capable, within twenty years, of doing any work that a man can do,” and, in a 1970 issue of Life magazine, Marvin Minsky is quoted as declaring, “In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight.”
This pursuit remained a rather obscure corner of the AI landscape until quite recently, when leading AI companies pinpointed the achievement of AGI as their primary goal, and noted AI “doomers” declared the existential threat from AGI to be their number one fear. Many AI practitioners have speculated on the timeline to AGI, with one predicting, for example, “a 50% chance that we have AGI by 2028.” Others question the very premise of AGI, calling it vague and ill-defined; one prominent researcher tweeted that “The whole concept is unscientific, and people should be embarrassed to even use the term.”