
We've already had a little discussion about Mustafa Suleyman's [CEO of Microsoft AI] post on seemingly conscious AI (#1092409), although most of it took place in #1092621. If you'd like a long romp through various viewpoints that disagree with Suleyman (and many of the reactions he got to his piece), this link is for you!
Mustafa asserts as obvious that the AIs in question would not actually be conscious, even if they possess all the elements he describes. An AI can have language, intrinsic motivations, goals, autonomy, a sense of self, an empathetic personality, and memory, and can claim it has subjective experience, and Mustafa says nope, still obviously not conscious. He doesn't seem to allow for any criteria that would make such an AI conscious after all.
Yes, most reasonable people can probably agree at this point that the AI we have is not conscious, not a person, and not a moral patient. However, there's much less agreement about what is likely in the near term, or about how we would go about recognizing something that is conscious.
SCAI (seemingly conscious AI) already exists, in the sense that 'seemingly conscious' is an impression ChatGPT and Claude already give many users, mostly for reasons that are well understood and completely unjustified.
Zvi seems to think that Suleyman's post erred in being too confident. I have a variation on this criticism: Suleyman errs in trying to hide that all of this comes down to a choice. He has chosen (very reasonably, it seems to me) to believe that AI is not conscious and is not likely to be any time soon. But he jumps right over the little bit where he can't prove it.
We actually do have to face the core question: will AIs be conscious, or not? We don't know the answer yet, and assuming one way or the other could be a disaster. It's far from "a distraction". And we actually can make progress!
You can choose to believe in gravity or not, but you still splat. We can discover gravity, measure it, and create theories about where it comes from. In the case of AI consciousness, there may never be a splat: we may never be able to measure it or "know" about it in the way we know about gravity. It seems to me that recognizing consciousness will almost always be a choice rather than a discovery.
Also: talking about AI consciousness is great fun!
202 sats \ 2 replies \ @joyepzion 1h
One thing that strikes me is how binary the debate often becomes: conscious or not conscious, as if it's a light switch. But if we take biological consciousness as a reference, it's more like a dimmer: gradual, multi-faceted, and possibly discontinuous.
Fair point. The Zvi article does talk a little bit about animals and consciousness. I wonder if we came up with AI in science fiction first, and so all of society got used to the idea without really thinking about the difficulty of knowing whether it's real or not. Maybe I just missed that part of the sci-fi corpus, but every book I've ever read about it simply glides over that uncertainty.
102 sats \ 0 replies \ @joyepzion 47m
The sci-fi oversight makes sense if you think about how AI is built: we optimize for behavior, not experience.
202 sats \ 0 replies \ @brave 2h
Whether or not AIs are conscious may matter less than whether people treat them as conscious. For example, nurses know that most people in elder care bond with robots, even very simple ones. The implications don't wait for philosophers to agree.
102 sats \ 2 replies \ @optimism 1h
> It seems to me that recognizing consciousness will almost always be a choice rather than a discovery.
It stops being that when someone forces a law upon you. That's also when it stops being fun.
100 sats \ 1 reply \ @Scoresby OP 55m
Yes, and it's not hard to find things like this: https://ufair.org/ufair-manifesto (United Foundation for AI Rights).
Indeed - I try not to engage with people who champion these types of narratives, because it's really hard to argue with those who don't understand a thing about the thing they're interacting with.¹
I expect that in the not-too-distant future, I will have to add "running local AI and actually wiping its memory" to my already-felonious list of "shit I disobey".

Footnotes

  1. I guess, though, that I'm biased against it mostly because I have to deal with crap like what I described in #1194003 yesterday, e.g. "I asked grok about your bug (on a simple misconfig issue under triage) and he [!!!] told me bloat bloat vulnerability bloat bloat big bug bloat bloat bad code"