Non-paywalled: https://archive.is/su98C
As machines insinuate themselves further into our thinking—taking up more cognitive slack, performing more of the mental heavy lifting—we keep running into the awkward question of how much of what they do is really ours. Writing, for instance, externalizes memory. Our back-and-forths with a chatbot, in turn, exteriorize our private, internal dialogues, which some consider constitutive of thought itself. And yet the reflex is often to wave away anything a machine produces as dull, mechanical, or unoriginal, even when it’s useful—sometimes especially when it’s useful. You get the sense that this is less about what machines can do than about a certain self-protectiveness. Hence the constant, anxious redrawing of the boundaries between human and machine intelligence. These moving goalposts aren’t always set by careful argument; more often, they’re a kind of existential staking of territory. The prospect of machine sentience hangs over all of this like a cloud. “I think, therefore I am,” Descartes wrote, grounding certainty in the thinking self when everything else could be doubted. Our trouble now is that if machines can “think,” we’re left to wonder: Who, or what, exactly, gets to say “I”?