I've been catching up with Yudkowsky and I find his arguments compelling. Without knowing it, we are probably creating superintelligent sociopaths. Not because AI and its designers are evil, but because being good to humans indefinitely and being superintelligent are independent of each other, and we haven't figured out how to make superintelligent machines that are guaranteed to be good to humans (assuming such a guarantee is even possible).
P(doom), according to Wikipedia:
P(doom) is a term in AI safety that refers to the probability of existentially catastrophic outcomes (or "doom") as a result of artificial intelligence. The exact outcomes in question differ from one prediction to another, but generally allude to the existential risk from artificial general intelligence.
I'm curious where stackers stand.
0 - AI will never be that intelligent: 22.7%
0 - we'll solve the alignment problem: 13.6%
0 - intelligence is inherently moral: 4.5%
0 - some AIs will be aligned and save us: 0.0%
10-30 - some risk but not that much: 22.7%
31-50 - more risk but odds on our side: 9.1%
51-70 - high risk but we might solve it: 4.5%
71-90 - very high risk: 13.6%
91-99 - we are doomed without a miracle: 9.1%
100 - alignment is needed and impossible: 0.0%

22 votes, 18h left