This article is written by an AI curmudgeon -- you know, those people who are mostly negative about AI, often sour about what it's doing to their industry (usually academia or journalism), and only grudgingly admit to any usefulness -- but the author gets at something that really bothers me:
OpenAI hardly help. They say their product can “think”, “learn” and “reason”, though if it does these things, it doesn’t do them in the way humans generally recognise. To be less charitable, the company has spent a lot of money on its system, and it doesn’t hurt for the public to believe it’s a wonder-product.
Why is it that I so rarely hear AI optimists call out the personification of AI?
As they say in this article,
is no more capable of lying to you than your car is. It works as designed: they asked it for plausible sentences; it fed some back.
And yet companies like OpenAI routinely publish papers talking about AI "scheming" or doing all sorts of other human-like mental things. It seems pretty obvious that using such terms muddies the waters and makes it harder to think clearly about what is actually going on.
Like the author of this article, I can think of a number of reasons AI companies might do this; what I don't understand is why the AI boosters -- people who like AI but are otherwise honest -- don't call it out.
Maybe I'm just reading the wrong papers.