
When Anthropic last year became the first major AI company cleared by the US government for classified use—including military applications—the news didn’t make a major splash. But this week a second development hit like a cannonball: The Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI firm objects to participating in certain deadly operations. The so-called Department of War might even designate Anthropic as a “supply chain risk,” a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, like China, which means the Pentagon would not do business with firms using Anthropic’s AI in their defense work. In a statement to WIRED, chief Pentagon spokesperson Sean Parnell confirmed that Anthropic was in the hot seat. “Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people,” he said. This is a message to other companies as well: OpenAI, xAI and Google, which currently have Department of Defense contracts for unclassified work, are jumping through the requisite hoops to get their own high clearances.

There’s plenty to unpack here. For one thing, there’s a question of whether Anthropic is being punished for complaining that its AI model Claude was used in the raid to remove Venezuela's president Nicolás Maduro (that’s what’s being reported; the company denies it). There’s also the fact that Anthropic publicly supports AI regulation—an outlier stance in the industry, and one that runs counter to the administration’s policies. But there’s a bigger, more disturbing issue at play: Will government demands for military use make AI itself less safe?

Researchers and executives believe AI is the most powerful technology ever invented. Virtually all of the current AI companies were founded on the premise that it is possible to achieve AGI, or superintelligence, in a way that prevents widespread harm. Elon Musk, the founder of xAI, was once among the loudest proponents of reining in AI—he cofounded OpenAI because he feared the technology was too dangerous to be left in the hands of profit-seeking companies.

...