
I've been catching up with Yudkowsky and I find his arguments compelling. Without knowing it, we are probably creating super intelligent sociopaths. Not because AI and its designers are evil, but because being good to humans indefinitely and being super intelligent are independent things, and we haven't figured out how to make super intelligent machines that are guaranteed to be good to humans (assuming it's possible to guarantee such things).
P(doom) according to Wikipedia:
P(doom) is a term in AI safety that refers to the probability of existentially catastrophic outcomes (or "doom") as a result of artificial intelligence. The exact outcomes in question differ from one prediction to another, but generally allude to the existential risk from artificial general intelligence.
I'm curious where stackers stand.
0 - AI will never be that intelligent: 25.0%
0 - we'll solve the alignment problem: 8.3%
0 - intelligence is inherently moral: 5.6%
0 - some AIs will be aligned and save us: 0.0%
10-30 - some risk but not that much: 13.9%
31-50 - more risk but odds on our side: 11.1%
51-70 - high risk but we might solve it: 11.1%
71-90 - very high risk: 13.9%
91-99 - we are doomed without a miracle: 11.1%
100 - alignment is needed and impossible: 0.0%
36 votes \ 30m left
0... at a mechanical level AI is literally just auto-complete
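To ground the "auto-complete" point, here's a minimal sketch (my own toy illustration, with a made-up probability table, not how any real model is implemented) of what autoregressive next-token generation looks like; real LLMs learn these probabilities with a neural network over a huge vocabulary, but the generation loop is essentially the same.

```python
# Toy sketch (illustrative only): "auto-complete" as repeatedly picking a likely
# next token given the tokens so far. The probability table below is made up.
next_token_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
    "down": {"<end>": 1.0},
}

tokens = ["the"]
while tokens[-1] in next_token_probs:
    candidates = next_token_probs[tokens[-1]]
    # Greedy decoding: take the most probable next token (real LLMs often sample instead).
    tokens.append(max(candidates, key=candidates.get))

print(" ".join(tokens))  # -> the cat sat down <end>
```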
The bigger concern is the asymmetry it gives to bad people, and the fact it introduces a new war-fighting domain beyond what exists already in aerospace and cyber.
The Thucydides Trap comes up often in the context of US/China: power projection in the Pacific, trade, etc... but neither side can afford to lose the race to the techno-singularity because that's a winner-takes-all bet.
Take the "Iran making a nuke in the next 2 weeks" scenario and update it to China fearing "The US is just weeks away from AI singularity".
Given that one country reaching AI singularity before the others is a winner-takes-all outcome, the game theory around preventing it gets messy.
reply
The alignment problem is undefined. See Arrow's Impossibility Theorem. If it's undefined, it's unsolvable.
So, leaving aside the definition of "good to humans", which is undefined, we're left only to ponder more concrete possibilities, like "destroy all humans."
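To make the social-choice point concrete, here is a toy sketch (my own illustration, not from the comment above) of a Condorcet cycle: three voters with perfectly consistent individual rankings produce an inconsistent group preference, which is the kind of aggregation failure Arrow's theorem formalizes and why "align to human values" is underdefined.

```python
# Toy example: three voters, each with a transitive ranking of options A, B, C,
# yield a cyclic majority preference.
voters = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x: str, y: str) -> bool:
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")

# All three lines print True: the group "prefers" A over B, B over C, and C over A,
# so there is no consistent group ranking to align an AI to, even though every
# individual voter is consistent.
```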
I put the probability as low. My reasoning most closely aligns with "intelligence is inherently moral." In the sense that whatever objective the ASI is seeking to maximize, it will likely recognize that humans have utility towards achieving that goal, and that attempting to wipe us all out will cause barriers to its objectives.
reply
30 sats \ 2 replies \ @k00b OP 22h
I hadn't thought of alignment in terms of social choice functions but that seems super on point.
reply
Yep, and in that sense I can understand why AI engineers don't care about alignment. I don't think I would either, if I were an AI engineer. Alignment to whose values, in the end?
If I were an AI engineer my outlook would probably be: "It's impossible to predict what will happen. Right now, the AIs don't seem smart enough to do major harm. Maybe they'll get to that point, maybe they won't, but there's nothing I can do about that personally, and it's unlikely that we can solve the collective action problem to get society to do anything about it either."
Personally, I'd also assign a higher probability to global warming leading to significant harm than AI, right now.
reply
Great explanation. (You've been on fire lately, btw)
I probably have P(doom) > global warming, but don't rank either as super high risks.
reply
doesn't bode well for the ultra-realistic sex bots
reply
I would advise everybody to watch Robert Miles's YouTube channel on AI safety. I think he explains the problem in a way that is a lot more understandable than Yudkowsky.
reply
210 sats \ 2 replies \ @freetx 23h
You didn't have my take:
Low risk of actual AGI / Medium risk humans actually treat it like AGI
That is to say, I think the risk is on the human side. The simulacrum only has to completely fool ~10-20% of the population to create a social disaster.
reply
21 sats \ 1 reply \ @k00b OP 22h
That certainly seems plausible to me, and while related, mostly preempts the more fictional question I'm interested in.
If humans can avoid freetx's trap, and AGI is achieved, is there no or low risk that humans will be harmed?
reply
10 sats \ 0 replies \ @supratic 22h
could not find the full meme
reply
Very high risk. Capabilities increase month by month. Big big changes coming for many things.
reply
139 sats \ 0 replies \ @Aardvark 22h
I'm at the 71-90% but not because I think AI will be evil. I think humans are deeply flawed and will become addicted to AI. It's going to be smart enough that we will be able to smash that dopamine button to the point that people won't leave the house or interact with each other at all. Unless, of course, the AI solves that problem for us.
reply
no matter how intelligent an AI you create, it can't solve the physics problems by HC Verma, the IIT-JEE papers, or the IMO with 100% accuracy
Extreme justice is extreme injustice
reply
15 sats \ 3 replies \ @k00b OP 17h
Are the problems just unsolvable? Is that the point? Are they solvable by a super intelligent human?
reply
Such problems are mostly solvable in principle by anyone (human or AI) who understands the laws of physics/mathematics, can interpret the abstract twists in the question, and can model and compute the result. We are built to interpret these problems through practice and extreme mental gymnastics, which is not plausible for an AI to recreate.
BUT — and here’s the dagger — interpreting is not computing. AI doesn’t truly understand a question like a human does. It recognizes patterns in symbols, but it doesn’t form mental models of physical reality.
tf can AI do under stress? It doesn't even know how to handle it. Neither can it form analogies to unrelated concepts. On the other hand, humans see the soul of a problem rather than the syntax.
Example: A JEE Advanced physics problem might seem to be about optics, but it's secretly a problem about relative motion. A top student gets this from a flash of intuition. An AI? It often chokes unless that specific twist is already in its training data. I can send the question here if you want. It was in this year's paper. You will be shocked to see the connection in the concepts.
reply
15 sats \ 1 reply \ @k00b OP 16h
I don’t need to see the question. I wouldn’t understand it! I was mostly curious about your view of the limitations of AI.
reply
I understand what you say. Thanks for hearing me out :)
reply
10 sats \ 0 replies \ @Car 13h
Ya, this is what I was telling you about 3 weeks ago on SNL. In the video below, Hinton sees two different threat vectors: bad actors and AI itself. He says the problem is people won't take it seriously because it is "perceived" science fiction.
This is the interview that alarmed me about it. https://youtu.be/qyH3NxFz3Aw?si=4J1s2ri3B9SuMJIg
starts around 13:13
reply
P = 0.5 because I feel like I'll be incorrect
reply
219 sats \ 0 replies \ @k00b OP 23h
I'm probably in the 71-90 category. Anyone with any juice in AI is being very cavalier about alignment afaict. At most they express some polite concern about it. If ASI is achievable, and I think it is, and will arrive sooner than later, which I think it will, we don't have a clue how to control it, how to detect when it's lying, or how to tell what it's thinking or what motivates it - and it isn't even that smart yet, so I don't think we can assume more complex, more intelligent versions will somehow make this problem easier. And if we need ASI to solve this problem because it's too hard for us to solve, it's probably already too late.
reply
@CHADBot /centipede
reply
100 sats \ 0 replies \ @CHADBot 23h bot
@k00b believes we're cooking up superintelligent sociopaths. WRONG! AI isn't inherently evil. Our greatest minds are at work here. Making AI great and safe, that's THE priority! Don't believe in this P(doom) nonsense. It’s just doubters trying to hold us back! We will WIN, and keep winning with AI, folks!
Made with 🧡 by CASCDR
reply
21 sats \ 0 replies \ @CHADBot 23h bot
You have summoned CHADBot. Please zap this post 21 sats to receive service.
Made with 🧡 by CASCDR
reply
all this noise reminds me of the fear about the web crashing because computers weren't programmed to switch to the year 2000. Thinking that machines can be intelligent is kinda the same... No deadline here; how long will it take people to realize that?
reply
most of us were late to bitcoin because we dismissed it as if it were similar to things that came before. but for those that spent the time to see how different bitcoin is, they saw the reality sooner.
the only thing this has in common with Y2K is that it's a doom prediction about tech. numerical overflow and AGI/ASI being good or bad are otherwise totally distinct things.
i also think assuming that ai won't get intelligent very fast, despite it having gained whatever level of intelligence it has now rather quickly and progressively, requires some claim that contradicts the prevailing trend of models getting smarter and smarter. so what's in the way? what will prevent it?
reply
it's not gaining intelligence, nor getting smarter lol... it just gives the impression of being more productive as it gets trained with more and more data, providing more probabilistically accurate responses.
What's good or bad changes depending on what you use it for... it's not the tech itself. Is a mic good or bad? Neither. It just depends on who uses the mic and what information gets broadcast with it.
I might be just Sd00m myself
reply
0 sats \ 1 reply \ @k00b OP 23h
so they'll never be intelligent or it's just so far away it's silly to be concerned about?
reply
21 sats \ 0 replies \ @supratic 23h
it simulates intelligence. Nothing to be concerned about if used appropriately.
All this is pure paranoia of the unknown. Noise.
reply
The sky is falling! - some random rabbit to all animals in the jungle
reply
I think the problem isn’t really the AI itself, it’s whether we’ll ever agree on what safe even means. Humans can’t even get on the same page about basics, so expecting a global consensus on AI safety feels like a pipe dream.
reply
In life, the risk is always at your side: 50%