Trump directed all US federal agencies to stop using Anthropic’s AI tools and gave them six months to phase out the technology, citing security concerns.
The conflict began after Anthropic refused to allow its AI to be used for mass surveillance or autonomous weapons, saying such use violates its safety principles.
Anthropic plans to challenge its “supply chain risk” label in court, while OpenAI reached a separate agreement with the Pentagon under similar safety conditions.
On February 28, 2026, US President Donald Trump ordered all federal agencies to immediately stop using artificial intelligence tools developed by Anthropic, the company behind the chatbot Claude. He gave departments that currently depend on the technology six months to phase it out. In a post on Truth Social, Trump strongly criticised the company, calling its leaders “radical left” and “woke,” and accused Anthropic of trying to control how the US military operates.
Trump wrote, “That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military.” He claimed that Anthropic’s refusal to relax its rules was putting American lives and national security at risk, and directed every federal agency to cease all use of the company’s tools. “We don’t need it, we don’t want it, and will not do business with them again!” he said.
Trump reiterated the six-month deadline and warned Anthropic to cooperate during the transition, cautioning that the company could face civil and criminal action if it did not.
Defence Secretary Pete Hegseth went further by labelling Anthropic a “supply chain risk to national security.” This move could block Anthropic from working with the US military and defence contractors. Hegseth also ordered all military partners to stop commercial dealings with the company. “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” he wrote on X. He ended his post by saying, “America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.”
The conflict began after Anthropic refused to give the Pentagon unrestricted access to its AI systems. In a statement, the company said, “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” It said its safety rules are necessary to prevent its technology from being used for mass surveillance of American citizens or in fully autonomous weapons, two uses that were excluded from its contract with the Department of War. CEO Dario Amodei said Anthropic “cannot in good conscience” agree to such demands.
Despite the dispute, Anthropic’s tools have been used by US agencies since 2024 for sensitive military planning and intelligence work. In July 2025, the company signed a $200 million contract with the Pentagon. Its technology helps officers analyse large amounts of classified data quickly and efficiently.
Meanwhile, on the same day as Trump’s announcement, OpenAI said it had signed a separate deal with the Pentagon. OpenAI CEO Sam Altman wrote on X on February 28, 2026, that “we reached an agreement with the Department of War to deploy our models in their classified network.” The agreement includes similar safety rules, such as bans on mass surveillance and requirements for human control over weapons. He said these principles are already part of US law and policy.
Altman also said OpenAI will deploy engineers to the Pentagon to ensure its AI systems are used safely. He added that the government should offer the same fair terms to all AI companies and called for cooperation instead of legal conflict.
Anthropic has said it will challenge the “supply chain risk” label in court. The company argues that the decision is “legally unsound” and sets a dangerous example for other American businesses. It stressed that it will not change its stance on protecting citizens’ privacy and preventing autonomous weapons.
“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court,” Anthropic said in its statement.