
Artificial General Intelligence, or AGI, has been a topic of contention ever since AI became a mainstream conversation point. AGI is the hypothetical capability of an AI system to emulate all the cognitive abilities of the human mind. While the idea has triggered excitement and fear in equal measure, several tech leaders have shared their views on this prospective superintelligence. In a recent interview, Sam Altman said that GPT-3 and GPT-4 were steps towards AGI. In his interaction with The Wall Street Journal, Altman said that “affordable, abundant energy and AGI” will matter most in the next decade for improving the human condition. He went on to explain why one should not fear AGI: “We’ll be able to express ourselves in new creative ways. We will make incredible things for each other, for ourselves, for the world, for this unfolding human story,” the Journal quoted him as saying.

Artificial general intelligence (AGI) is a hypothetical type of artificial intelligence that would be able to understand and perform any intellectual task a human being can. My question is: are we close to artificial intelligence having its own will, or is it still a long way off? And what should the limits be?
“Thou shalt not make a machine in the likeness of a human mind.”
I just want R2D2 and TARS, not Ultron.
“Deep in the human unconscious is a pervasive need for a logical universe that makes sense. But the real universe is always one step beyond logic.”
AGI is not an IF, but a WHEN.
Everything about a human being happens inside a brain and only a brain. And a brain is just a bunch of cables. It's big and has a complex structure, but it is still only that.
What we call "consciousness" and "free will" derive from this specific system (and from nothing else). They are "side effects" of the specific organization of neurons.
Once we have an artificial entity able to replicate this structure and organization, then it will produce the same effect. Same logic, same cause, same result.
AGI will necessarily demonstrate a form of consciousness, even if most people (due to religious bias, but also to the technical difficulty of proving consciousness) will reject the idea. That will not change the fact that AGI will equal and eventually surpass human capabilities.
Your guess is as good as mine, but I’m afraid of the “gradually, then suddenly” phenomenon. Right now we can claim to be in control because, at its core, generative AI is just a sophisticated predictive tool, impressive as it is. But what if developers start to program traits like empathy into it, just as Japan has been doing for decades with its robots? Once generative AI learns to read human emotions and respond to them, who knows when the crucial turning point comes, when all the dots connect and it starts thinking according to its own will? It will probably happen faster than any of us would like.
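The “sophisticated predictive tool” point can be made concrete with a toy sketch (not a real LLM; the corpus and greedy decoding here are made up purely for illustration): a bigram model that “writes” by always picking whichever word most often followed the previous one in its training data. There is no goal or will anywhere in it, only frequency counting.

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram model trained on a made-up corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def generate(start, length=5):
    """Greedy next-word prediction: no goals, no will, just counts."""
    out = [start]
    for _ in range(length):
        followers = next_counts.get(out[-1])
        if not followers:
            break
        # Pick the most frequent follower of the last word.
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
```

Real models replace the frequency table with a neural network over billions of parameters, but the generation loop is the same shape: predict the next token, append, repeat.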