You didn't have my take:
Low risk of actual AGI / medium risk that humans treat it like AGI.
That is to say, I think the risk is on the human side. The simulacrum only has to completely fool ~10-20% of the population to create a social disaster.
That certainly seems plausible to me, and while related, it mostly preempts the more fictional question I'm interested in:
If humans can avoid freetx's trap and AGI is achieved, is there little or no risk that humans will be harmed?
could not find the full meme