1446 sats \ 3 replies \ @SimpleStacker 10h \ on: What is your P(doom)? How likely is it that AI leads to existential catastrophe? AskSN
The alignment problem is undefined. See Arrow's Impossibility Theorem: there is no aggregation rule that turns individual preferences into a single coherent collective ranking while satisfying some basic fairness conditions, so "aligned with human values" has no well-defined target. If it's undefined, it's unsolvable.
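To make the aggregation point concrete, here's a minimal sketch (my own toy example, not from the thread) of the Condorcet cycle, the classic illustration behind Arrow's theorem: three voters with perfectly transitive individual preferences still produce a cyclic majority preference, so "the" collective ranking an AI could be aligned to simply doesn't exist.

```python
# Toy Condorcet-cycle demo. Three voters rank options A, B, C from most to
# least preferred; pairwise majority vote then yields A > B, B > C, C > A.
from itertools import permutations

voters = [
    ("A", "B", "C"),
    ("B", "C", "A"),
    ("C", "A", "B"),
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

# Prints the three cycle edges: A over B, B over C, C over A.
for x, y in permutations("ABC", 2):
    if majority_prefers(x, y):
        print(f"majority prefers {x} over {y}")
```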
So, setting aside "good to humans," which is undefined, we're left to ponder more concrete possibilities, like "destroy all humans."
I put the probability as low. My reasoning most closely aligns with "intelligence is inherently moral," in the sense that whatever objective the ASI is seeking to maximize, it will likely recognize that humans have utility toward achieving that goal, and that attempting to wipe us all out would create obstacles to its objectives.
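As a rough sketch of that instrumental-value argument (every probability and payoff below is invented purely for illustration), a maximizer would only attempt to eliminate humans if doing so raised its expected objective value, which it plausibly wouldn't once conflict risk and lost human cooperation are priced in:

```python
# Toy expected-value comparison; all numbers are assumptions, not claims.
strategies = {
    # (P(strategy succeeds), payoff if it succeeds, payoff if it fails)
    "cooperate_with_humans": (0.95, 100, 20),
    "eliminate_humans":      (0.50, 120, -200),
}

def expected_value(p_success, v_success, v_failure):
    return p_success * v_success + (1 - p_success) * v_failure

for name, params in strategies.items():
    print(name, expected_value(*params))
# cooperate_with_humans 96.0 vs eliminate_humans -40.0:
# under these assumed numbers, wiping humans out is just a bad bet.
```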
reply
Yep, and in that sense I can understand why AI engineers don't care about alignment. I don't think I would either, if I were an AI engineer. Alignment to whose values, in the end?
If I were an AI engineer my outlook would probably be: "It's impossible to predict what will happen. Right now, the AIs don't seem smart enough to do major harm. Maybe they'll get to that point, maybe they won't, but there's nothing I can do about that personally, and it's unlikely that we can solve the collective action problem to get society to do anything about it either."
Personally, right now I'd also assign a higher probability to global warming leading to significant harm than to AI.
reply
Great explanation. (You've been on fire lately, btw)
I probably have P(doom from AI) > P(doom from global warming), but don't rank either as a super high risk.
reply