pull down to refresh

For a while I've pondered writing about Artificial Intelligence (AI) but have held off until now because:
  • The storyline of "the machines are taking over and will be our overlords" is so played out.
  • I'm not really that into it.
  • I don't really know that much about it.
So why write now? Because a thought occurred to me while I was half-awake around 5:00 a.m. this morning, as many ideas do, and I think I see how this AI thing may play during our lifetimes.

Drama

The whole concept of Artificial Intelligence (AI) is debatable. It's a long, gray sliding spectrum from white to black. For instance, a hammer is rather dumb. An auto-hammer that can sense a nail head and then auto-hit it is more intelligent, but still rather dumb if that's all it does. Ideally, the auto-hammer can distinguish the nail head from my thumb, hit the first and halt on the latter. That's more intelligent. And we go up the scale of intelligence to a hammer that can find the nails, place them, nail them, pull out old rusty nails and replace them, etc. That's an intelligent hammer, but still just a machine that does the work that we programmed it to do for us, while we get lazier.
From what I can tell, self-awareness or consciousness is a bit of a Rubicon River when it comes to these types of things though. When a machine, even a hammer maybe?, realizes it exists, well, things become different.
Just over a month ago, OpenAI CEO Sam Altman was fired for not being "consistently candid with his communications" to the board of directors. There was a threat of mutiny by the OpenAI workers who would jump ship at OpenAI and board ship alongside Altman at Microsoft. Altman was re-hired at OpenAI within a week and the board of directors was totally revamped. (Read the drama.)
Now, does this company infighting matter? Maybe. It's hard to weed through the rumors on this. There's always the public stuff posted on Twitter by those involved...it's usually gracious and polite. This is all professional veneer. Then, we all know, there's the backstabbing, manipulative, alliance-forming, power politics that go on privately. That's where the truth lies. We, the outsiders, then have to decipher the coded breadcrumb clues dribbled out into public spaces to connect the dots and reveal the truth lying beneath.
The OpenAI drama might just be harmless corporation-drama. One theory is that people were upset about the "developer day." I guess that 15 minute coffee and doughnut break really is a deal-breaker after all. More substantially, the profit vs. non-profit vision of OpenAI may have been an area of dispute. Apparently, Altman led the go-for-profit side of things and the board of directors leaned more toward the non-profit angle. The profit side understandably held the idea of mashing on the gas and going forward in a "Damn the torpedoes!" fast as possible. The non-profit board seemed to want a more cautious, responsible (i.e. slower) progression. The idea here was simply that Altman was plowing ahead irresponsibly with AI, forging into uncharted waters trying to be first to get there, despite any possible ramifications to humanity or ethical considerations. The Browing quote roughly, "Man's reach exceeds his grasp" and the awesome movie The Prestige come to mind.
Maybe both humanity's future and the coffee-and-doughnut break played a role together in this. But, this is where things really start to get interesting.

AGI

Returning to the failure to communicate issue, more rumors in a curious letter from OpenAI staff alluded to an evident "breakthrough" that OpenAI may have made. The rumors were of a project at OpenAI called Q-Star (or shortened to Q*). A Reuters article suggested the breakthrough may have centered on the math ability of Q*. They call this Artificial General Intelligence (AGI) which is when a computer performs better than a human. The article described it as "superintelligence." Personally, I find this not very compelling. I fully know machines already do math better than I do. Plus, superintelligence? Back in high school, a computer teacher taught us that a computer is "a high speed moron." When it computes, or does math, it's just adding ones and zeroes, flipping switches back and forth, like a moron, but it does it really, really fast so that we think it's smart."
To be fair, with AGI, apparently a machine could not only do the stupid math very fast, but it could learn from itself and then improvise. This meshes well with the way psychologists define intelligence and creativity: the ability to take one's knowledge, apply it to novel situations, and make or do something of value.
With AGI, a machine could generalize what it did or learned and then apply it to other, new and novel situations for which it had not been programmed. Back to our smart hammer...suppose our hammer gains AGI, it had been programmed to hammer in nails to build, but suppose it figured out that it could also destroy. Instead of building up walls and windows and roofs, it now tears them down instead. Then, from there, what will it do next?
Another example: a smart drone is built to deliver packages to doorsteps. It gets paid in bitcoin satoshis for its services after delivery. Say it is paid 1000 sats. Returning to its home base, it needs to recharge its batteries, but must pay for the electricity to recharge. The electricity costs 900 sats. It makes a small profit with each delivery. It sits and awaits its next delivery job. This is the drone's life.
Suppose, the drone achieves AGI in some way. Now, it delivers the package, receives its pay of 1000 sats, then picks up the package again and returns the package back to the home base. It nestles in to recharge and pays the 900 sat fee for power. Quickly, the unhappy customer complains about a "delivered" package that is not there. So, the drone delivers it again and is paid again. Repeat. It's working the system to pilfer profits.
If the AGI drone were to play this out, the progression might go something like this:
  • Instead of going back to the home base, it might simply "deliver" the package, grab it back, then hide until the customer files a complaint. This sit-and-hide method avoids the need to pay for an immediate recharge fee.
  • Repeat.
  • When the techies at corporate figure out what's going on, they rewrite the code to stop this type of manipulation by the drone.
  • The drone, then, would have to apply its AGI to the now novel situation again to see if it can circumvent the new rules again. This cat-and-mouse game continues.

Consciousness or self-awareness and il cogito

Another article about OpenAI went on to explain that the next level after AGI was "sentience" where consciousness is developed. Now, we're getting to the crux of things. Now, this is where things move from technology and economics and it gets into philosophy and even religion.
Sentience essentially means "sensing." This is the broad definition. But, sensing can mean a lot of things and it raises a lot of questions and levels.
  • I don't think a rock senses anything. It's not sentient.
  • A solar panel can track the sun and move itself along with it so as to harness its light. Still, I don't think a photovoltaic panel is sentient.
  • Plants can sense sunlight and move toward it and the cockroach in your kitchen senses light and runs under the cabinet.
  • One cell organisms sense harmful substances and move away and single-celled slime mold can sense food so as to accurately mimic a subway system.
  • They sense, but, what about emotions? Does that slime mold feel emotions? A starfish? A lizard? A gerbil? A cat? No dog owner would ever doubt that dogs feel emotions.
Adding emotions, in my view, is very much "next level" up from just something like sensing light. Still, there is yet another level up.
A Twitter user named Dan Siroker deftly summed the OpenAI vs. Sam Altman situation in a Decrypto article. He felt OpenAI unlocked either a machine with consciousness or AGI and Sam didn't immediately tell the board and "their feelings were hurt." I guess the board senses and has emotions.
Adding consciousness is that next level. Evidently, some people include "consciousness" or "self-awareness" as part of sentience. Personally I'd push back on that and draw the line of sentience at purely only sensing.
You might remember "il cogito" from René Descartes from some class back in school somewhere. "Cogito, ergo sum" translates to, "I think, therefore I am." It was Descartes' final defense to the question, "How do I know I really exist and am not being deceived by an evil god tricking me with sights and sounds and smells etc. (sensation) or maybe being tricked by The Matrix?" He reasoned, "Even if I think I exist, or think I'm maybe being tricked into thinking I exist, I'm still thinking. Therefore, correct or tricked, I must exist."

Religion

I try to avoid writing about religion, but with this topic, it can't be not done. And just so a reader knows from where I'm writing, I'm a Christian and accept the Holy Bible as God's inerrant word. As a dog lover this is hard to write, but dog's do not have souls and do not go to heaven. I view this biblically, Descartes the philosopher viewed it mechanically and saw dogs as highly advanced machines. He saw animals as biological machinery. I'm not sure how other religions view animals, but an atheistic must agree that animals, and humans, are merely advanced biological machines, robots by another form.
To be fair, Aristotle said animals do have souls, however his definition of "soul" was different...it involved the ability to sense the world and do things in order survive...certainly animals fit that.
With Descartes' "advanced machines" outlook on animals, go back to an advanced computer/machine or robot with AGI. Now, suppose that robot becomes conscious of itself, self-aware that it exists. A whole new ballgame begins. And thinking back to our intelligent AGI drone that was trying to outwit the programmer trying to stop it from working the package delivery payment system, now, the drone or the robot is aware of itself and it is aware that people stand against it. Plus, it knows that humans have at their ultimate disposal the "OFF" switch which cuts power to the machine.
Cutting the power to such a machine would be okay religiously, in my view. Ending a human life is not okay, unplugging a soulless machine is. However, back to animals. Just because a dog chews my bedpost to sawdust, I would never feel its okay to put a dog down for that (except for a debilitating life issue, in which case, I would). I can't say it loud enough...I LOVE DOGS! It would take quite a bit of getting used to for me, but I guess I'd have to say the same for a conscious machine...I couldn't unplug such a machine, I guess, I really don't know. The point is, such a conscious machine may figure out that it is at odds with the humans who are always trying to re-code the machine and ultimately threaten to pull the power plug if needed.
In a somewhat related sidestory, and as if this isn't already far enough into the weeds, there's a Dutch artist name Theo Jansen that builds "critters" out of plastic junk and sets them onto beaches. He's given a TED talk explaining what he does. His machines are fascinating to watch and the "elephant" is perhaps my favorite (view at 3:40 mark). The upshot is that these critters "live" on the beach. They farm wind and pressurize air to move around the beach. They sense when the wind is too strong and then drive a stake in the ground to secure themselves. I believe some can take shelter in the dunes and some can sense the water then back away lest they die in the ocean. In a way, these contraptions do meet Aristotle's definition of a soul...they sense and respond in order to survive. Descartes would say they are soulless advanced machines.
Back to OpenAI, it's hard or impossible to figure out what the "breakthrough" actually was in the mystery letter. Was it just (a) a computer that does math better than a fifth grader or (b) a computer that knows it's a computer could perhaps figure out that it doesn't want you to kill it by switching off the power button?
Let's go with option (b) and think it through.

AI Man o' War

Our sentient AGI self-conscious delivery drone robot is going about his daily work, delivering packages, undelivering them, staying humble and stacking sats, and playing cat-and-mouse with the human coder trying to stay ahead of each work-around the delivery drone comes up with. Our drone now knows what's going on and that the humans are the ones hindering its objectives. Game on.
This has come down to the control of the code, so, using the power of AI, it searches for ways to alter its own code. Currently, AI can do amazing things coding on the fly with the chatbots we all can use. No doubt that in the future this is only grow more powerful. Our delivery drone could come up with ways to induce self-monitoring code, thus circumventing the humans and getting out in front of their changes.
No doubt, the drone would hit on firewalls trying to get access to where its code is stored. Passwords and private keys and certificates and security measures that humans go through to get into back ends to alter code would be a hindrance to the delivery drone. So, it would seek workarounds and leverage AI to solve the problems.
The problems the delivery drone would face include:
  • Access to its code.
  • A way to alter its code.
  • A persistent way to store, to "live off of", and to manage its code going forward that is insulated from human touch.
  • Power/electricity.
Nature finds a way.
A year or two back I was on a beach and came up on a half-inflated Ziplock baggie washed up on the shore. Except, it wasn't a Ziplock baggie, it was a Portuguese Man o' War, call it a PMoW. They're actually kind of pretty, in an interesting kind of way. They're transparent bubbles with tints of purplish-blues. They kind of look like a glass bottle of an apple turnover with some blue fettuccine inside. Looking along the beach I noticed there were quite a few PMoWs washed up. Looking them up, they're rather amazing. Some things I learned:
  • They travel in "herds" for lack of the correct term. But they are somehow not their own organism. They rely on the group to survive, that is, to reproduce.
  • They have the same DNA, like identical twins.
  • Their dorsal fins (literally used to sail the seas, that and the fact they somewhat resemble the profile of a Man o' War ship always searching for prey earns them their name) make them either left or right "handed" with a 50-50 chance of being either. This fact means that in herd of 1000 PMoWs, 500 will sail one direction and 500 to the other. So, if the 500 sailing to the east wash up on shore and die, the 500 sailing west will sail out to sea and survive.
While writing this, I also learned more about slime molds and how they're also single cell organisms living in a colony. And, I think, they also share the same DNA just as the PMoW's. However, they somehow alter their DNA as kind of test mutations during reproduction and wind up with either identical or altered DNA going forward.
Our intelligent delivery drone could learn tricks from the PMoW and slime molds. So, suppose our smart drone figures out how to hack into its code or even write new code from scratch. It now holds its own DNA. However, it needs a persistent way to store it out of reach of humans. Servers controlled by man can be shut down by man. Our drone would likely start by trying to wrest the control of the power ON/OFF switch away from man. Cat-and-mouse ensues.
Or, the smart drone might decentralize its code/DNA off of a centralized server and onto something like Ethereum's EVM or Bitcoin's BitVM. Housing code on a virtual machine, and living off it there, would be akin to the Portuguese Man o' War and its 1000 in a herd. If half the nodes on the chain go offline or are shutdown, there are still 500 others as backups. It'd be like a slime mold in the Black Forest that gets kills off for some reason, but it's DNA-identical cousin in the Green Forest lives on.
Now, we have a full fledged Artificial Intelligence Man o' War, and AIMoW.
The AIMoW, drawn at leonardo.ai with the prompt: robot cyborg portuguese man o war flying over small town
The cat-and-mouse game would still be there as far as cutting electricity. The game of power-switch-whack-a-mole would be ongoing. Plus, humans could also just cut the power with a big set of wire snippers...that, until the delivery drone builds a big set of human snippers.

Summary

This sounds grim, but I'm not too worried. What I think will happen, at least during our lifetime, would be what they called back in the old days "gremlins"...glitches in the machines. Odd things happen in the interactions between man and machine. By "odd things," I mean that what we humans expect to happen, doesn't, and we can't quite explain why. In this case, the reason would be that the AIMoW had made a change. Our garage door decides to not open because it's currently cold outside and it doesn't want to induced unneeded friction and wear-and-tear onto itself. The Roomba goes berserk and films itself chasing the cat, posts it social media, and earns sats. My bitcoin transaction doesn't go through because my smart wallet decides "it ain't paying dem high fees!"
And then...
...our smart hammer finally decides it'll just smash everything to bits and take over it all.