Anthropic CEO says AI company 'cannot in good conscience accede' to Pentagon's demands Feb. 27 09:01 am JST
By KONSTANTIN TOROPIN and MATT O'BRIEN WASHINGTON
Anthropic CEO Dario Amodei said Thursday the artificial intelligence company “cannot in good conscience accede” to the Pentagon’s demands to allow unrestricted use of its technology, deepening the unusually public clash with the Trump administration that is threatening to pull its contract and take other drastic steps by Friday.
The maker of the AI chatbot Claude said in a statement that it’s not walking away from negotiations, but that new contract language received from the Defense Department “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”
Sean Parnell, the Pentagon’s top spokesman, said on social media Thursday that the military “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.”
Anthropic’s policies prevent its models from being used for those purposes. It is the last among its peers — the Pentagon also has contracts with Google, OpenAI and Elon Musk’s xAI — that has not supplied its technology to a new U.S. military internal network.
“It is the Department’s prerogative to select contractors most aligned with their vision,” Amodei wrote in a statement. “But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.”
Defense Secretary Pete Hegseth gave Anthropic an ultimatum on Tuesday after meeting with Amodei: Open its artificial intelligence technology for unrestricted military use by Friday, or risk losing its government contract. Military officials warned that they could go even further and designate the company as a supply chain risk, or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products.
Amodei said Thursday that “those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”
Parnell reiterated that the Pentagon wants to “use Anthropic’s model for all lawful purposes” but didn’t offer details on what that entailed. He said opening up use of the technology would prevent the company from “jeopardizing critical military operations.”
“We will not let ANY company dictate the terms regarding how we make operational decisions,” he said.
The talks that escalated this week began months ago. Amodei said that if the Pentagon doesn't reconsider its position, Anthropic “will work to enable a smooth transition to another provider.”
Sen. Thom Tillis, a North Carolina Republican who is not seeking reelection, said Thursday that the Pentagon has been handling the matter unprofessionally while Anthropic is “trying to do their best to help us from ourselves.”
“Why in the hell are we having this discussion in public?” Tillis told reporters. “This is not the way you deal with a strategic vendor that has contracts.”
He added, “When a company is resisting a market opportunity for fear of negative consequences, you should listen to them and then behind closed doors figure out what they’re really trying to solve.”
Sen. Mark Warner of Virginia, the ranking Democrat on the Senate Intelligence Committee, said he was “deeply disturbed” by reports that the Pentagon is “working to bully a leading U.S. company.”
“Unfortunately, this is further indication that the Department of Defense seeks to completely ignore AI governance,” Warner said in a statement. It “further underscores the need for Congress to enact strong, binding AI governance mechanisms for national security contexts.”
While Pentagon officials say they always will follow the law with their use of AI models, the department has taken steps to change the culture among the military legal ranks.
Hegseth told Fox News last February, weeks after becoming defense secretary, that “ultimately, we want lawyers who give sound constitutional advice and don’t exist to attempt to be roadblocks to anything.”
The same month, Hegseth also fired the top lawyers for the Army and the Air Force without explanation. The Navy’s top lawyer had resigned shortly after the election in late 2024.
Associated Press writer Ben Finley contributed to this report.

ChristopherBlackwell
Thank you for posting a broader perspective on the issue than just a formal statement. :-)
Movements exist across many industries to incorporate AI into their production pipelines, from robotics to article editing, proofreading, and even the creation of original intellectual property.
Amazon intends to completely robotize their warehouses with AI, eliminating countless thousands of real labor jobs in their facilities alone. My daughter's former employer delegated all their article editing work to AI, laying off ALL human proofreaders and editors. My son's sound engineering business has slowed to a crawl because AI can do it for pennies on the dollar.
AI costs almost nothing beyond up-front licensing.
Rather than just cut these workers loose to fend however they can, PAY displaced workers a substantial percentage of the salary or wages they lost to AI implementation. Joe Smith loses his job to a robot? Then pay Joe Smith 88% of his salary, with the remainder covering robot maintenance and repair.
Do NOT allow ownership to further enrich themselves and pocket these "savings" to increase their wealth and profits.
If they're gonna implement AI, then do it at a cost that doesn't devastate the labor and intellectual property generation work forces.

You can look away from a painting, but you can't listen away from a symphony
I feel ya.
Posted by jacque on February 27, 2026, 11:46 pm, in reply to "My thoughts on AI"
AI as a tool can be beneficial. But removing the human from the process right now is dangerous. It's moving too fast. And it is literally costing lives in many ways.
Greed drives it more than speed. A Jack of all trades is master of none, but oftentimes better than a master of one.
... And many thanks for the masters in skill for setting our standard of work to instill.
I agree... it is dangerous to remove human oversight.
Posted by Skye on February 28, 2026, 5:51 am, in reply to "I feel ya."
I've noticed how AI tries to anticipate what I want to write and comes up with a totally different meaning than what I wanted or intended. It's like auto spell check on steroids. NOT Good!
AI should be treated like cancer. It comes with a warning.
It is replacement of the workforce that is most concerning. Corporate leadership looking to use AI to cut costs and boost profits is the problem, and it is for greed and nothing else.
AI cannot generate anything human beings can't. The corporatocracy clearly proves they're not ready to use it.
AI needs to come with disclaimers of AI production and generation.
AI videos are literally exploding on YouTube and other social media. Like the felon fighting the Canadian hockey player. The most incapable people can use it, and they lack the integrity to use it responsibly.