The question "Can machines think?" is not new, but it has resurfaced with the latest advances in generative AI. For the record, the answer depends entirely on how you define the word. But is it even a helpful question? Not really.
We've had calculators for a very long time, and no one today believes they can think. They perform calculations that humans can perform, only faster. Computers themselves were designed to compute, a job that humans did before computers existed.
Noted computer scientist and Turing Award winner Edsger W. Dijkstra once wrote:
"...the question of whether Machines Can Think, a question of which we now know that it is about as relevant as the question of whether Submarines Can Swim."
So what? Debating whether what a submarine does counts as "swimming" like a fish isn't helpful. What a submarine can do, and what it allows humans to do, is far more useful. A submarine and a fish accomplish many of the same things, but in very different ways. For as long as we have had machines, we have designed them to make our lives easier. They outperform humans at the tasks we build them for, but they go about those tasks in different ways.
What does AI allow us to do? How can you use it effectively? Where are the gaps and weak points? How will it impact humanity and economies? These are all useful questions. Whether it is "self-aware" or "thinking" has little value. When you dive into how these generative AIs are designed, most of the hype and worry about them becoming aware seems silly and boils down to very broad definitions. They are machines. We are nowhere near creating a human mind. I would argue we don't really understand our own minds.