
Most of you have probably run into some Yudkowsky work online, possibly without knowing it. He's been one of the most prolific voices on the internet, especially on the technical side. As the founder of Less Wrong, he's broadly credited with founding the AI alignment movement, and he's written a lot about rationality and altruism too. More popularly, he's known as an AI doomer. In his Time magazine op-ed, he speaks with a unique level of graveness about misaligned intelligence:
If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
His view is that we're most certainly doomed if we don't solve the alignment problem before building increasingly intelligent machines. He's not soothed by the naive and hopeful proposed solutions to alignment: that we'll just use one AI to align another, that it'll inherit our ethics from the internet's text, that we can trust it to do only what we say, that intelligence is inherently good-willed, and so on.
Anyway, I hadn't fully acquainted myself with the full breadth of Yudkowsky's AI doom, and after posting this I got the urge to. This is a honker of a podcast and I still haven't gotten through it all. Yudkowsky is frustrated through quite a lot of it, but at the least it's a nice index into his thinking.
110 sats \ 0 replies \ @freetx 15 Jun
AI is ultimately just going to become a second-amendment kind of issue. We will need our own AI aligned with our interests to fight off the misaligned (or purposely harmful) AI of gov / big-corp.
Like all 2A issues, the asymmetry in power is balanced by sheer numbers. The gov or big-corps may wind up with crazily powerful AI, but against the potential hundreds of millions of individual AIs, we will still be able to fend them off (no different than how a nation of 200M gun owners can fend off a nation-state that has tanks, fighter jets, etc).
Ultimately, it's quite possible we have already peaked and Autocorrect++ will never be sentient.
Also, if Yud's stylist is reading this: either get him better hats or have him stop wearing them. Be brave and bald, I say.