“I should have thought of this 10 years ago,” Yoshua Bengio says.
The science fiction author Isaac Asimov once came up with a set of laws that we humans should program into our robots. In addition to a first, second, and third law, he also introduced a “zeroth law,” which is so important that it precedes all the others: “A robot may not injure humanity or, through inaction, allow humanity to come to harm.”
This month, the computer scientist Yoshua Bengio, often called a “godfather of AI” because of his pioneering work in the field, launched a new organization called LawZero. As you can probably guess, its core mission is to make sure AI won’t harm humanity.
Even though he helped lay the foundation for today’s advanced AI, Bengio has grown increasingly worried about the technology over the past few years. In 2023, he signed an open letter urging AI companies to press pause on state-of-the-art AI development. Between AI’s present harms (like bias against marginalized groups) and its future risks (like engineered bioweapons), there were very strong reasons to think that slowing down would have been a good thing.