Most of you have probably run into some of Yudkowsky's work online, possibly without knowing it. He's been one of the most prolific voices on the internet, especially on the technical side. As the founder of Less Wrong, he's broadly credited with starting the AI alignment movement, and he's written a lot about rationality and altruism too. He's more popularly known for being an AI doomer. In his Time magazine op-ed, he writes with a unique gravity about misaligned intelligence:
If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
His view is that we're most certainly doomed if we don't solve the alignment problem before building increasingly intelligent machines. He's not soothed by the naive and hopeful solutions to alignment: that we'll just use one AI to align another, that it'll inherit our ethics from the internet's text, that we can trust it to do only what we say, that intelligence is inherently good-willed, and so on.
Anyway, I hadn't fully acquainted myself with the breadth of Yudkowsky's AI doom, and after posting this I got the urge to. This is a honker of a podcast and I still haven't gotten through it all. Yudkowsky is frustrated through quite a lot of it, but at the very least it's a nice index into his thinking.