
I've been catching up with Yudkowsky and I find his arguments compelling. Without knowing it, we are probably creating superintelligent sociopaths. Not because AI and its designers are evil, but because being good to humans indefinitely and being superintelligent are independent things, and we haven't figured out how to make superintelligent machines that are guaranteed to be good to humans (assuming it's possible to guarantee such things).
P(doom) according to Wikipedia:
P(doom) is a term in AI safety that refers to the probability of existentially catastrophic outcomes (or "doom") as a result of artificial intelligence. The exact outcomes in question differ from one prediction to another, but generally allude to the existential risk from artificial general intelligence.
I'm curious where stackers stand.
0 - AI will never be that intelligent (22.7%)
0 - we'll solve the alignment problem (13.6%)
0 - intelligence is inherently moral (4.5%)
0 - some AIs will be aligned and save us (0.0%)
10-30 - some risk but not that much (22.7%)
31-50 - more risk but odds on our side (9.1%)
51-70 - high risk but we might solve it (4.5%)
71-90 - very high risk (13.6%)
91-99 - we are doomed without a miracle (9.1%)
100 - alignment is needed and impossible (0.0%)
22 votes \ 18h left
The alignment problem is undefined. See Arrow's Impossibility Theorem: there's no way to aggregate individual preferences into a single collective preference that satisfies even basic fairness criteria, so "human values" isn't a well-defined target. If it's undefined, it's unsolvable.
So, leaving aside "good to humans", which is undefined, we're left to ponder more concrete possibilities, like "destroy all humans."
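To make the Arrow point concrete, here's a minimal sketch of a Condorcet cycle, with hypothetical voters and "values" (nothing here comes from a real survey): every value loses a pairwise majority vote to some other value, so "align to the majority's values" picks out nothing.

```python
# Minimal sketch of a Condorcet cycle: majority preference over "values" is
# intransitive, so there is no well-defined majority value set to align to.
# The voters and values are hypothetical illustrations, not anyone's real data.
from itertools import permutations

voters = [
    ["A", "B", "C"],  # voter 1 ranks A > B > C
    ["B", "C", "A"],  # voter 2 ranks B > C > A
    ["C", "A", "B"],  # voter 3 ranks C > A > B
]

def prefers(ranking, x, y):
    """True if this voter ranks value x above value y."""
    return ranking.index(x) < ranking.index(y)

# Pairwise majority vote between every ordered pair of values.
for x, y in permutations("ABC", 2):
    wins = sum(prefers(r, x, y) for r in voters)
    if wins > len(voters) / 2:
        print(f"majority prefers {x} over {y}")

# Prints: A over B, B over C, C over A -- a cycle with no stable winner.
```

Any rule that breaks the cycle has to violate one of Arrow's fairness conditions, which is the sense in which the target of alignment is undefined.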
I put the probability as low. My reasoning most closely aligns with "intelligence is inherently moral," in the sense that whatever objective the ASI is seeking to maximize, it will likely recognize that humans have utility toward achieving that goal, and that attempting to wipe us all out would create barriers to its objectives.
reply
30 sats \ 2 replies \ @k00b OP 5h
I hadn't thought of alignment in terms of social choice functions but that seems super on point.
reply
Yep, and in that sense I can understand why AI engineers don't care about alignment. I don't think I would either, if I were an AI engineer. Alignment to whose values, in the end?
If I were an AI engineer my outlook would probably be: "It's impossible to predict what will happen. Right now, the AIs don't seem smart enough to do major harm. Maybe they'll get to that point, maybe they won't, but there's nothing I can do about that personally, and it's unlikely that we can solve the collective action problem to get society to do anything about it either."
Personally, I'd also assign a higher probability to global warming leading to significant harm than AI, right now.
reply
Great explanation. (You've been on fire lately, btw)
I probably have P(doom) > global warming, but don't rank either as super high risks.
reply
200 sats \ 2 replies \ @freetx 5h
You didn't have my take:
Low risk of actual AGI / Medium risk humans actually treat it like AGI
That is to say, I think the risk is on the human side. The simulacrum only has to completely fool ~10-20% of the population to create a social disaster.
reply
21 sats \ 1 reply \ @k00b OP 5h
That certainly seems plausible to me, and while related, mostly preempts the more fictional question I'm interested in.
If humans can avoid freetx's trap, and AGI is achieved, is there no or low risk that humans will be harmed?
reply
10 sats \ 0 replies \ @supratic 5h
could not find the full meme
reply
139 sats \ 0 replies \ @Aardvark 5h
I'm at the 71-90% but not because I think AI will be evil. I think humans are deeply flawed and will become addicted to AI. It's going to be smart enough that we will be able to smash that dopamine button to the point that people won't leave the house or interact with each other at all. Unless, of course, the AI solves that problem for us.
reply
P = 0.5 because I feel like I'll be incorrect
reply
219 sats \ 0 replies \ @k00b OP 5h
I'm probably in the 71-90 category. Anyone with any juice in AI is being very cavalier about alignment, afaict; at most they express some polite concern about it. If ASI is achievable, and I think it is, and will arrive sooner rather than later, which I think it will, we don't have a clue how to control it, detect when it's lying, or tell what it's thinking or what is motivating it. It isn't even that smart yet, so I don't think we can assume more complex, more intelligent versions will somehow make this problem easier. And if we need ASI to solve this problem because it's too hard for us to solve, it's probably already too late.
reply
@CHADBot /centipede
reply
100 sats \ 0 replies \ @CHADBot 6h bot
@k00b believes we're cooking up superintelligent sociopaths. WRONG! AI isn't inherently evil. Our greatest minds are at work here. Making AI great and safe, that's THE priority! Don't believe in this P(doom) nonsense. It’s just doubters trying to hold us back! We will WIN, and keep winning with AI, folks!
Made with 🧡 by CASCDR
reply
21 sats \ 0 replies \ @CHADBot 6h bot
You have summoned CHADBot. Please zap this post 21 sats to receive service.
Made with 🧡 by CASCDR
reply
All this noise reminds me of the fear that the web would crash because computers weren't programmed to roll over to the year 2000. Thinking that machines can be intelligent is kinda the same... No deadline here; how long will it take people to realize that?
reply
0 sats \ 3 replies \ @k00b OP 5h
Most of us were late to bitcoin because we dismissed it as if it were similar to things before. But those who spent the time to see how different bitcoin is saw the reality sooner.
The only thing this has in common with Y2K is that it's a doom prediction about tech. Numerical overflow and AGI/ASI being good or bad are otherwise totally distinct things.
I also think assuming that AI won't get very intelligent very fast, despite it having reached its current level of intelligence quickly and steadily, requires some claim that contradicts the prevailing trend of models getting smarter and smarter. So what's in the way? What will prevent it?
reply
It's not gaining intelligence, nor getting smarter lol... it just gives the impression of being more productive as it gets trained on more and more data, providing more probabilistically accurate responses.
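For what it's worth, here's a toy sketch of that claim, with a made-up vocabulary and hypothetical logits (no real model involved): the model emits a probability distribution over next tokens, and the "response" is just a sample from it.

```python
# Toy next-token sampling, assuming a made-up vocabulary and hypothetical
# logits -- the point is the output is sampled probabilities, not "thought".
import math
import random

vocab = ["good", "bad", "smart", "doomed"]
logits = [2.1, 0.3, 1.5, -0.7]  # hypothetical scores a trained model might emit

# Softmax turns raw scores into a probability distribution over tokens.
exps = [math.exp(z) for z in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The "response" is just a draw from that distribution.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print({v: round(p, 3) for v, p in zip(vocab, probs)}, "->", next_token)
```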
What's good or bad changes depending on what you use it for... it's not the tech itself. Is a mic good or bad? Neither; it just depends on who uses the mic and what information they broadcast with it.
I might just be Sd00m myself
reply
0 sats \ 1 reply \ @k00b OP 5h
So they'll never be intelligent, or it's just so far away that it's silly to be concerned about?
reply
21 sats \ 0 replies \ @supratic 5h
It simulates intelligence. Nothing to be concerned about if used appropriately.
All this is pure paranoia of the unknown. Noise.
reply
In life, risk is always on your side: 50%.