I've been catching up with Yudkowsky and I find his arguments compelling. Without knowing it, we are probably creating superintelligent sociopaths. Not because AI and its designers are evil, but because being good to humans indefinitely and being superintelligent are independent properties, and we haven't figured out how to make superintelligent machines that are guaranteed to be good to humans (assuming such guarantees are even possible).
P(doom), according to Wikipedia:
> P(doom) is a term in AI safety that refers to the probability of existentially catastrophic outcomes (or "doom") as a result of artificial intelligence. The exact outcomes in question differ from one prediction to another, but generally allude to the existential risk from artificial general intelligence.
I'm curious where stackers stand.
| P(doom) | Rationale | Votes |
| --- | --- | --- |
| 0 | AI will never be that intelligent | 25.0% |
| 0 | we'll solve the alignment problem | 8.3% |
| 0 | intelligence is inherently moral | 5.6% |
| 0 | some AIs will be aligned and save us | 0.0% |
| 10-30 | some risk but not that much | 13.9% |
| 31-50 | more risk but odds on our side | 11.1% |
| 51-70 | high risk but we might solve it | 11.1% |
| 71-90 | very high risk | 13.9% |
| 91-99 | we are doomed without a miracle | 11.1% |
| 100 | alignment is needed and impossible | 0.0% |
36 votes
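
For what it's worth, here's a minimal Python sketch of one way to summarize the results: a vote-weighted mean P(doom). The bucket midpoints are my own assumption (the poll itself doesn't imply any particular point estimate), with the four 0 options counted as 0 and the 100 option as 100.

```python
# Vote-weighted mean P(doom) from the poll above.
# Assumption (mine, not the poll's): each bucket is represented
# by its midpoint.
buckets = [
    (0.0,   25.0),  # 0 - AI will never be that intelligent
    (0.0,    8.3),  # 0 - we'll solve the alignment problem
    (0.0,    5.6),  # 0 - intelligence is inherently moral
    (0.0,    0.0),  # 0 - some AIs will be aligned and save us
    (20.0,  13.9),  # 10-30 - some risk but not that much
    (40.5,  11.1),  # 31-50 - more risk but odds on our side
    (60.5,  11.1),  # 51-70 - high risk but we might solve it
    (80.5,  13.9),  # 71-90 - very high risk
    (95.0,  11.1),  # 91-99 - we are doomed without a miracle
    (100.0,  0.0),  # 100 - alignment is needed and impossible
]

# The vote shares already sum to 100, so dividing by 100 normalizes.
mean = sum(midpoint * share for midpoint, share in buckets) / 100.0
print(f"vote-weighted mean P(doom): {mean:.1f}%")  # about 35.7%
```

By that crude measure, the crowd's average lands around 36%, i.e. in the "31-50" bucket.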