US defense officials bar contractors from using the startup’s technology, citing disputes over surveillance, limits on lethal weapons, and national security ethics concerns.
In early March 2026, the Pentagon formally notified AI startup Anthropic that it is now classified as a “supply chain risk,” a rare designation that could sharply limit the firm’s role in US defense projects and wider federal contracting.
The move is an escalation of a months‑long clash over how far a private AI vendor can go in restricting military use of its models.
According to the notice, effective immediately, US government contractors are barred from using the firm’s technology in work for the Pentagon, although the exact scope and implementation details of the restrictions remain unclear. The step follows Defense Department frustration over Anthropic’s push to insert contractual limits on applications such as mass domestic surveillance and fully autonomous lethal weapons.
The designation lands as Anthropic’s systems are being used by the US military in sensitive operations, including tasks that one source linked to activities in Iran, raising the stakes for both sides. By labeling the company a supply chain risk, the Pentagon is signaling that it views these contractual red lines as an unacceptable constraint on what it considers lawful military uses of critical technology.
The move also fires a warning shot at the broader AI industry. Historically, similar supply‑chain risk labels have been applied mainly to foreign suppliers tied to strategic rivals, not to domestic startups that openly market to US institutions. Now, defense officials are testing whether the same tool can be turned against an American AI vendor over disagreements on usage terms. The result could be a court fight that defines how much leverage technology companies have to enforce their own safety and ethics policies when dealing with national security customers.
Anthropic has previously said it would challenge such a designation in court, arguing that forcing firms to allow unconstrained use of AI in surveillance and weapons would have dangerous downstream consequences.
For now, the Pentagon’s decision injects fresh uncertainty into government access to Anthropic’s tools even as the military leans more heavily on advanced AI for data analysis, targeting support, and battlefield decision‑making.