This article is written by an AI curmudgeon -- you know, those people who are mostly negative about AI, often sour about what it's doing to their industry (usually academia or journalism), and who only grudgingly admit to any usefulness -- but the author gets at something that really bothers me:
OpenAI hardly help. They say their product can “think”, “learn” and “reason”, though if it does these things, it doesn’t do them in the way humans generally recognise. To be less charitable, the company has spent a lot of money on its system, and it doesn’t hurt for the public to believe it’s a wonder-product.

Why is it that I so rarely hear the AI optimists call personification of AI out?

As they say in this article,
[it] is no more capable of lying to you than your car is. It works as designed: they asked it for plausible sentences; it fed some back.
And yet companies like OpenAI routinely publish papers talking about AI "scheming" or doing lots of other human-like mental things. It seems pretty obvious that using such terms muddies the water and makes it more difficult to think about what is actually going on.
Like the author of this article, I can think of a number of reasons that AI companies might do this; what I don't understand is why the AI boosters who do like AI but who are also honest, don't call it out.
Maybe I'm just reading the wrong papers.
The steelman case for not calling out the personification of AI is that we can't prove that we ourselves aren't just pattern-recognizing fill-in-the-blank machines....
.... right?
There are also some very oddly human behaviors, like how asking it to spend more time reasoning can often lead to more wishy-washy, harder-to-understand, less helpful responses.
reply
so, it sounds like you are saying the steelman is that we don't know that it's not doing more than statistical next-word generation (this is my grug understanding of AI, which will stand in lieu of everything short of consciousness or human-like thinking).
but this is highly unsatisfying to me! I am willing to engage with the idea that there is something more than statistics going on, but I don't think this personification is helping us understand what it is.
also, I'm pretty convinced so far that LLMs don't have any real sense of "understanding" -- something that many personification words imply is present.
reply
No, I'm not saying it's doing more than statistical pattern matching. I'm saying we as humans may not be doing more either. So if we can use words like "understanding" to describe ourselves, why can't we use them to describe AI?
For the record, I am not a pure materialist, and I think humans have souls, so I don't think we're just statistical pattern matchers. I'm just trying to steelman the case.
reply
Ah! I see.
In that case, I'd say whatever statistical pattern matching humans are doing feels different from what LLMs are doing. When I talk with another human, it feels like there is depth to their reasoning. Or, again, there is a thing where you can watch a human "get" something. We have a bunch of phrases to describe this. Maybe what we are doing is still statistical pattern matching, but it seems very different from what LLMs do.
But, if I'm honest, I'm less interested in this question of what humans are doing or even whether AI as we know it is conscious. It quickly becomes a morass of tautologies and indefinable terms.
What I am curious about is how to think about something that we know is just running a simulation of human language, but that could just as easily simulate language from a world where LLMs or even computers didn't exist. How do we think about this thing that will play any role we ask it to?
reply
I agree... I find the consciousness question to be quite pedantic. "Declare your beliefs!", I say, and let's move on! Haha
But yeah, in terms of how to think of AIs, I do think it'd be helpful if more people realized that they're glorified fill-in-the-blank / guess-the-next-word machines.
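To make the "guess-the-next-word machine" framing concrete, here's a toy bigram model in Python. This is only a cartoon of the idea -- real LLMs use learned neural networks over vast corpora, not raw counts -- but the basic job is the same: given what came before, emit a statistically plausible next word. The corpus and function names here are made up for illustration.

```python
# Toy "guess-the-next-word machine": count which word follows which,
# then predict the most frequent follower. A cartoon of what LLMs do
# at vastly larger scale with learned probabilities.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Tally each word's observed successors.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def guess_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(guess_next("the"))  # "cat" follows "the" more often than any other word
```

The point of the toy: there's no "knowing" anywhere in it, just frequencies fed back. The debate is whether scaling that idea up produces something qualitatively different.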
reply
102 sats \ 3 replies \ @kepford 13h
Why is it that I so rarely hear the AI optimists call personification of AI out?
Because they are optimists. The answer is in the question.
reply
I disagree. You can be optimistic about bitcoin and still be clear-eyed about what it is and what it is not.
Same for AI: you can be very optimistic about it without needing to make up stuff about what it is doing.
reply
102 sats \ 1 reply \ @kepford 12h
Sure... but in my experience an optimist is different than a non-optimist (myself) being optimistic about something. I'm optimistic about bitcoin and AI to a much lesser degree. But sir, I am not an optimist. I would say the majority in marketing and tech "journalism" are optimists. A large percentage are, dare I say... naive optimists. The number of realists in these fields is small. AI is just the latest tech trend.
reply
102 sats \ 0 replies \ @kepford 12h
The other thing that just occurred to me is that tech writers would not waste time putting qualifiers on what the AI hype machine says. There is little incentive to do so. It would be like a political journalist saying... well Obama said x but we all know politicians lie. Well, duh. We all know marketers will market. We all know Silicon Valley CEOs will hype their thing. They have done so my entire life.
reply
107 sats \ 0 replies \ @kepford 12h
"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."
~ Edsger W. Dijkstra
reply
The point raised in the article is an interesting one. I wouldn't dismiss it as merely "nitpicking," but rather as a valuable reminder that the language we use matters. I agree that overusing personification ("AI thinks, reasons, plans...") can be misleading.
reply
0 sats \ 1 reply \ @kepford 14h
I call it out. And this behaviour is in line with typical Silicon Valley behaviour. A technology can be both useful and misrepresented by its creators for gain. Much of the tech press is like the regular press: they don't want to get cut out.
reply
Thing is, they call it hallucinating instead of lying for a reason. It's not to humanize it -- hallucinating humanizes it, and lying would too, but lying is the wrong word. AI chatbots, unlike humans, do not know anything. They are nowhere near humans in how they work or what they can do. They are just different things. Machines.
Some of the criticism of AI is simply reactionary to the hype machine, and honestly it is not that interesting any more. It's like complaining that a forklift isn't fast because the marketing talked about how fast their forklifts are. Yeah, marketers lie and water is wet.
Good reviews and good journalism aren't reactionary; they focus on the benefits and trade-offs.
reply