Peggy Noonan’s Wall Street Journal column of February 13, 2026—built around a long essay by Dario Amodei, CEO of Anthropic—announces an “AI tsunami.” We are told that something vast, unstoppable, and civilization-altering is approaching. Noonan, with her gift for framing, captures the mood perfectly: alert, alarm, awe, dread. The story, she notes, has already rolled across “a thousand podcasts, posts and essays.” This one is meant to stand above the rest.
Yet for anyone who has spent serious time thinking about AI—especially anyone who has spent serious time using it—there is an odd emptiness at the center of the warning. The column is full of the language of agency: what “AI will do,” what it “wants,” how it will “think,” what it will “decide,” and how we will be unable to stop it. But much of that language is not analysis. It is metaphor. And metaphor becomes dangerous when it is treated as argument.
I am not arguing that AI is unimportant. But we will not understand AI—or respond rationally to it—until we stop describing it as though it were a rival species with a will.
- The Oldest Story: “A Mind That Wants Things”
- The Real Power of AI Is Not Volition
- Why CEOs Talk Like This
- What AI Actually Threatens
- “Every Human Cognitive Ability”?
- The Great Confusion: Intelligence vs. Consciousness
- The “Tsunami” May Be Political
- A More Rational Position