In February 2026, the US Department of Defense issues an ultimatum to AI company Anthropic. The demand: unrestricted access to Anthropic’s AI models – including applications that Anthropic’s own terms of service explicitly prohibit. The threat: a war-production law dating from 1950.

Seven months earlier, the same company had signed a $200 million contract with the same Department of Defense. Anthropic had made responsible AI development a founding principle – and was, at the same time, the first AI company cleared for deployment on classified military networks.

The conflict exposes mechanisms that reach far beyond a single company.

What this is not about

The question of data access is not at issue in this conflict. Since 2018, the US Department of Justice has had the CLOUD Act (Clarifying Lawful Overseas Use of Data Act), which compels US companies to hand over any data they store – regardless of where in the world that data physically resides. This applies to Microsoft, Google, Amazon, Meta, OpenAI, Anthropic and every other US technology company.

Whether Anthropic grants the Pentagon access to its AI technology or not: the US government has had, has, and will continue to have access to user data for as long as the provider is a US company.

What is at stake is something else entirely: unrestricted access to the technology itself – to use the AI models for mass surveillance and autonomous weapons systems.

A safety pledge and a military contract

Anthropic has positioned itself as a company that places responsible AI development at the centre of its mission. Its Acceptable Use Policy explicitly prohibits mass surveillance of populations and the deployment of autonomous weapons systems without human oversight.

At the same time, the company accepted a $200 million prototype contract with the Department of Defense in July 2025, developed dedicated Claude Gov Models for government clients, and – through partners such as Palantir – was already operating on classified military networks. At the time the conflict erupted, Anthropic was the only AI company cleared for use on classified systems.

  • Any company that enters into a contract with the Department of Defense places itself within its legal and operational sphere of influence.
  • Any company that operates on classified networks becomes critical infrastructure – and a potential target for regulatory compulsion.
  • Any company that accepts $200 million from the DoD creates an economic dependency that can be used as leverage.

Anthropic appears to have bet that its own terms of service would be respected as a contractual limitation. The DoD sees it differently. It regards the existing payments and deployments as grounds to treat Anthropic as part of the military supply chain and to override all restrictions – through contractual pressure, through a supply-chain risk designation, or through application of the Defense Production Act.

Three levels of escalation

Contract termination. The most obvious measure: cancellation of the $200 million contract. Financially manageable for Anthropic, which derives most of its revenue from the commercial market.

Supply-chain designation. Anthropic is declared a Supply Chain Risk. The consequence: every company that does business with the US military would have to remove Anthropic from its systems. Given that Anthropic is connected to AWS (Amazon), Palantir and numerous government partners, the economic damage would be substantial.

Defense Production Act. A war-production law from 1950 is invoked to compel an AI company to surrender its technology. The Pentagon sets the deadline for Friday, 5:01 p.m.

The second level is the most remarkable. Supply-chain risk designations have so far been reserved for foreign adversaries: Huawei (China), Kaspersky (Russia). Applying the instrument to a domestic US company would set a precedent.

Beyond one company

The Anthropic case demonstrates a mechanism that is relevant to every technology company headquartered in the United States.

With a government contract, a company can be classified as part of the critical supply chain, compelled under the Defense Production Act to surrender its technology, and economically isolated through a supply-chain designation. This is not limited to AI. It potentially applies to any technology the Pentagon deems defence-relevant: cloud infrastructure, quantum computing, biotechnology, semiconductors, cryptography.

Without a government contract, the risk persists. The Defense Production Act allows the government to compel companies to provide goods it classifies as defence-relevant – even in the absence of any existing business relationship. Biden used the DPA in 2023 to require AI companies to conduct safety tests and share information. The current application would be a significant escalation, but the legal framework exists.

In both scenarios, the company loses control over how its technology is used. It remains private in name – but the decision on how that technology is deployed rests with the state.

Consequences for business location

For technology companies that operate globally and whose business model depends on trust, the United States is becoming an increasing liability risk.

European customers must justify, under the GDPR, why they trust a provider whose technology can be requisitioned by the US military at any time. Companies in regulated industries – healthcare, finance, critical infrastructure – face the question of whether a US AI provider can still be considered compliant under these circumstances. Foreign governments using US AI technology must now assume that the same technology is being used in parallel by the US military.

The message to the market is hard to refute: a US company’s terms of service are valid only for as long as the US government chooses not to override them.

At the same time, the case sends a signal to the industry itself. Anthropic was one of the few companies that treated AI safety not merely as marketing but as an operational constraint. OpenAI, Google and xAI have already agreed to make their AI available for all “lawful purposes”. Anthropic was the last major US provider to set limits on military use. If Anthropic, too, relents – or is compelled by law to do so – that last boundary falls. For the race to set AI safety standards, this means less incentive to adopt restrictions that can be overridden anyway.

Geopolitical chain reactions

The case has implications that extend beyond the US technology sector.

Precedent. If the United States uses the Defense Production Act to force an AI company into military cooperation, other states will take note – and may cite it as justification for their own measures. China, Russia, but also India or Turkey could apply identical mechanisms to their domestic technology companies. The argument writes itself: even the United States does it.

Market fragmentation. The logical consequence is a partitioning of the global AI market along geopolitical lines: US AI for the American sphere of influence, Chinese AI (DeepSeek, Baidu) for the Chinese, European AI for markets that insist on regulatory sovereignty. Each region with its own standards, its own controls. This is the opposite of an open technology market – and it raises costs for everyone.

Weaponisation of the supply-chain designation. The threat to declare a domestic company a Supply Chain Risk because it refuses to abandon its ethical standards renders the instrument arbitrarily deployable – against any company that resists government demands.

Investor risk. Anthropic is planning an IPO later this year, according to NPR. The conflict creates a new risk category: regulatory compulsion risk – not regulation in the conventional sense, but the possibility of having a business model rewritten by a wartime statute.

Location risk USA: a strategic assessment

From a strategic standpoint, the Anthropic case introduces a new calculus for any company developing proprietary technology of strategic value.

The instruments available to the US government are not limited to wartime or emergencies:

  • CLOUD Act – access to data
  • Defense Production Act – access to technology
  • Supply Chain Risk Designation – economic isolation in case of refusal

All three are being deployed or threatened here in the context of a contract dispute.

Risk comparison by jurisdiction

  • Capital and ecosystem – US: highest global concentration of venture capital, talent and infrastructure. Outside the US: lower capital access, but growing alternatives (EU, UK, Canada, Singapore).
  • Government access – US: full legal access to data and technology. Outside the US: dependent on local jurisdiction.
  • Compulsion – US: compulsion possible via the Defense Production Act. Outside the US: no comparable instrument in most jurisdictions.
  • Economic risk – US: economic isolation if non-cooperative. Outside the US: potential loss of US market access.

The warning extends beyond AI companies. It applies to any technology the state might classify as defence-relevant: quantum computing, biotechnology, cryptography, semiconductors, space, cybersecurity, robotics.

The counterargument

Silicon Valley still offers the world’s highest concentration of venture capital, talent and infrastructure. No other location comes close. A mass exodus of technology companies is unlikely in the near term.

What is likely to change is the sophistication of the location decision. Companies whose business model depends on global trust – particularly vis-à-vis European, Asian or governmental customers – will have to weigh the US as a strategic risk factor. Not as a disqualifying criterion, but as a variable that did not previously exist in this form.

European and Asian alternatives are gaining relevance in this context. Mistral (France), Aleph Alpha (Germany), DeepSeek (China) – all become more attractive to customers that require regulatory independence from the United States.

The strategic response for affected companies is likely not avoidance but structuring: a holding company outside the US, intellectual property in a neutral jurisdiction, operational presence in the US for the US market. Much as cloud providers restructured around data residency after the Snowden revelations.

Only this time, it is not about data. It is about the technology itself.