Yep, and in that sense I can understand why AI engineers don't care about alignment. I don't think I would either, if I were an AI engineer. Alignment to whose values, in the end?
If I were an AI engineer my outlook would probably be: "It's impossible to predict what will happen. Right now, the AIs don't seem smart enough to do major harm. Maybe they'll get to that point, maybe they won't, but there's nothing I can do about that personally, and it's unlikely that we can solve the collective action problem to get society to do anything about it either."
Personally, right now I'd also assign a higher probability to global warming causing significant harm than to AI.
Great explanation. (You've been on fire lately, btw)
I'd probably put P(doom) for AI above global warming, but I don't rank either as a super high risk.