Anthropic filed a federal lawsuit against the U.S. government on March 9, 2026, in the U.S. District Court for the Northern District of California. The company challenged the Department of Defense’s designation of Anthropic as a supply chain risk to national security. This designation placed Anthropic on what functions as a national security blacklist, barring it from Pentagon contracts and requiring defense contractors to certify they do not use Anthropic’s AI models, including Claude.
- Key point: This was the first time a U.S.-based company received a label usually reserved for foreign adversaries.
The lawsuit named the Department of Defense, Defense Secretary Pete Hegseth, the Executive Office of the President, and over a dozen other federal agencies and officials as defendants. Anthropic filed a second, related lawsuit in the U.S. Court of Appeals for the District of Columbia Circuit, addressing different aspects of the government’s actions.
The conflict originated from a dispute over usage restrictions on Anthropic’s Claude AI model. Anthropic maintained strict guardrails that prohibited Claude’s use for mass domestic surveillance of American citizens and for fully autonomous weapons systems. The company viewed these restrictions as necessary to prevent violations of fundamental rights and to avoid unreliable AI deployment in life-or-death military decisions.
Claude’s usage policy:
- Prohibited mass domestic surveillance of American citizens
- Prohibited fully autonomous weapons systems
- Framed these limits as protections for fundamental rights
The Department of Defense, under the Trump administration, demanded removal of these restrictions to allow unrestricted use in national security operations. When Anthropic refused, the government escalated the matter.
In late February 2026, President Donald Trump directed federal agencies to immediately cease using Anthropic technology. Defense Secretary Pete Hegseth followed by issuing the supply chain risk designation on March 4, 2026, via formal letter to Anthropic.
This marked the first time a U.S.-based company received such a label, which the statute, 10 U.S.C. § 3252, typically reserves for foreign adversaries posing risks of sabotage or subversion in defense supply chains.
Anthropic argued in its complaint that the designation exceeded the legal scope of the supply chain risk statute. The law focuses on reducing risks from adversaries who could introduce malicious functions into covered systems. Anthropic asserted that a contractual disagreement over usage terms does not meet this definition. The company claimed the government misused the authority to punish Anthropic for its protected speech and viewpoints on AI safety.
The lawsuit alleged violations of the First Amendment. Anthropic stated that the government’s actions constituted retaliation against the company’s expressive activities, including its public positions on AI limitations and its refusal to alter terms of use.
The complaint described the designation as an unlawful campaign of retaliation aimed at coercing compliance with the administration’s demands on AI deployment.
Anthropic also claimed violations of Fifth Amendment due process rights. The company argued that the government blacklisted it without adequate notice, a meaningful opportunity to respond, or adherence to required legal protocols. No formal hearing or evidence-based process preceded the designation, despite the severe economic consequences.
The designation caused immediate harm. Anthropic previously held a $200 million Pentagon contract and deployed Claude on classified government networks, including at national laboratories and for intelligence analysis, modeling, simulation, operational planning, and cyber operations.
- Anthropic was among the first frontier AI developers deployed in U.S. national security systems.
- Federal agencies and contractors now face restrictions on using Claude.
- The blacklist could reduce Anthropic’s 2026 revenue by multiple billions of dollars.
Anthropic CEO Dario Amodei addressed the situation in a public statement prior to the filing. He confirmed receipt of the March 4 letter and declared the action legally unsound.
He argued that the statute requires the least restrictive means of protecting the supply chain, and that the designation as written is narrow in scope: it does not broadly prohibit all business with Anthropic outside specific Department of Defense contracts.
The dispute highlighted tensions between AI safety priorities and military operational needs. Claude’s deployment supported mission-critical applications until the restrictions became the sticking point. The government’s push for unrestricted access clashed with Anthropic’s red lines on surveillance and autonomous lethality.
Observers noted the unprecedented nature of applying a foreign-adversary tool against a domestic company. The supply chain risk framework under DFARS Subpart 239.73 targets threats from entities like Chinese military-linked firms. Using it here raised questions about overreach and potential abuse of executive power.
Anthropic sought injunctive relief in the lawsuits. The company requested that the court reverse the supply chain risk designation, block enforcement by federal agencies, and halt any further retaliatory actions. The filings stressed that the government’s moves threatened the economic value of one of the world’s fastest-growing private AI companies.
The federal court challenge directly confronts the potential abuse of national security authorities to suppress dissenting corporate positions on technology governance, testing constitutional limits on executive power in the rapidly evolving AI domain.

