
related: #1009671 What is your P(doom)? How likely is it that AI leads to existential catastrophe?

“I should have thought of this 10 years ago,” Yoshua Bengio says.
The science fiction author Isaac Asimov once came up with a set of laws that we humans should program into our robots. In addition to a first, second, and third law, he also introduced a "zeroth law," which is so important that it precedes all the others: "A robot may not injure humanity or, through inaction, allow humanity to come to harm."
This month, the computer scientist Yoshua Bengio — known as the “godfather of AI” because of his pioneering work in the field — launched a new organization called LawZero. As you can probably guess, its core mission is to make sure AI won’t harm humanity.
Even though he helped lay the foundation for today's advanced AI, Bengio has grown increasingly worried about the technology over the past few years. In 2023, he signed an open letter urging AI companies to press pause on state-of-the-art AI development. Both because of AI's present harms (like bias against marginalized groups) and AI's future risks (like engineered bioweapons), there are very strong reasons to think that slowing down would have been a good thing.
22 sats \ 0 replies \ @kepford 4h
Sigal Samuel
Researchers have been warning for years about the risks of AI systems, especially systems with their own goals and general intelligence. Can you explain what’s making the situation increasingly scary to you now?
Yoshua Bengio
In the last six months, we’ve gotten evidence of AIs that are so misaligned that they would go against our moral instructions. They would plan and do these bad things — lying, cheating, trying to persuade us with deceptions, and — worst of all — trying to escape our control and not wanting to be shut down, and doing anything [to avoid shutdown], including blackmail. These are not an immediate danger because they’re all controlled experiments...but we don’t know how to really deal with this.
If this is his basis for concern, I can't take this guy seriously. There are real dangers with any tech, and this includes AI. But he's pointing to nonsense hype stories designed to scare people. Either he's just naive, or he's invested in the movement to get governments to protect the tech titans with a regulatory moat.
11 sats \ 0 replies \ @kepford 6h
present harms (like bias against marginalized groups)