
I certainly wouldn’t consider it “life” or consciousness. I’m starting to see a lot of the champions of AI quietly change their definitions of what AI is or should be. The original vision was the evolution of a new lifeform, a superintelligence, a kind of digital god. Now the cheerleaders are beginning to set aside the requirements of self-awareness and consciousness, I suspect because they know it’s not going to happen.
But if this is the case, why would AI be a threat to civilization? If it’s just a novelty and not alive, what damage could it possibly do? It’s not so much that AI will turn on us or send out an army of robots to kill us; the real danger is that we will be tricked into believing that it really is all-knowing. If we rely on such faulty tech too much, it could destroy us merely by feeding us bad information and making us lazy.
Here are three possible consequences of AI that concern me the most, consequences which I don’t think most people have considered…
This points to some original thinking. It’s a bit surprising that these scenarios are even possibilities. I especially liked this quote:
“Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should…”