Technology as Leverage:
Europe’s Digital Dependency Under Scrutiny
Friday, 27 February 2026. Just before five.
In a negotiation room at the Pentagon, a compromise was on the table. AI company Anthropic had drawn two red lines: no autonomous weapons, no mass surveillance. The Pentagon had yielded on autonomous weapons — the phrase “as appropriate”, the loophole that would have permitted deployment of Anthropic’s AI model Claude in weapons systems, was to be removed. Anthropic had offered in return to work with the NSA on data collected under judicial oversight pursuant to the Foreign Intelligence Surveillance Act (FISA).
Two red lines that both sides had been willing to discuss. An agreement seemed possible.
Then the Pentagon demanded something else: access to commercial bulk data of American citizens — chatbot queries, GPS locations, credit card transactions. Not data released by a court. Data that individuals and businesses entrust to Claude in the course of normal operations. Anthropic refused. At 5:01 p.m., the deadline expired.
Three hours later, US Secretary of War Pete Hegseth designated Anthropic a “supply chain risk” — a designation previously reserved exclusively for foreign actors. The next morning, OpenAI signed the replacement contract.
What happened that week is not a contractual dispute. It is the clearest case to date of the US government wielding technology as leverage — not against a geopolitical rival, but against one of its own companies. The instruments deployed here — sanctions law, supply chain designation, Defense Production Act — are available to the US government against any American technology company: Microsoft, Apple, Google, Amazon — the very companies whose software and services your employees launched this morning.
Whose data is it?
Details of the negotiation week between Anthropic and the Pentagon have only now come to light, through reporting by The Atlantic and Golem.de.
The core of the failure was not about autonomous weapons — the Pentagon had made concessions there. It was about a question that concerns every business using a cloud service: who owns the data that customers entrust to a US provider?
The Pentagon demanded access to commercial data streams: not material that a court had released for national security purposes, but the entire data flow that Claude processes in commercial operation — queries from individuals and businesses, location data, financial transactions. The line between intelligence work and customer surveillance was to be erased.
Anthropic offered a compromise: cooperation with the NSA on FISA data — material collected under judicial authorisation with an intelligence purpose. The company refused to analyse commercial bulk data without judicial oversight.
A second point of contention was deployment architecture. Anthropic argued that even a purely cloud-based solution offered no guarantee against uncontrolled use. Modern military architectures operate with mesh networks — decentralised structures in which data is processed on devices in the field, not only in the cloud. A model that officially runs “only in the cloud” can, in these networks, effectively end up running on devices in the theatre of operations — without the provider being able to control or prevent it.
Emil Michael, who led the Pentagon side of the negotiations, publicly called Anthropic CEO Dario Amodei a “liar” with a “God complex”. Golem.de draws a historical parallel: Oppenheimer — the physicist who built the atomic bomb and then tried to prevent the hydrogen bomb. The US government revoked his security clearance. Not because he was wrong, but because he was inconvenient.
Notably: despite the supply chain risk designation, the US military continued using Claude for active military operations in the days that followed — invoking the six-month transition clause in the existing contract. The Pentagon simultaneously declared Anthropic’s AI a security risk and an indispensable tool.
Simultaneously a security risk and an indispensable tool — that was the negotiating position of the world’s most powerful military towards a company that refused to surveil its own customers.
The replacement deal — and its limits
One day after the Anthropic negotiations collapsed, OpenAI signed a contract with the Pentagon. CEO Sam Altman presented the deal as a responsible compromise — with three “red lines”:
- No mass surveillance of US citizens domestically
- No autonomous weapons — all lethal decisions made by humans
- No social credit system for US citizens
The phrasing sounds reassuring. The details less so.
OpenAI makes its models available to the Pentagon for “all lawful purposes”. The contract references current law — with the notable caveat that laws can change.
The cloud problem. OpenAI relies on purely cloud-based deployment, with the model remaining under OpenAI’s control. This is precisely the approach Anthropic rejected as insufficient — arguing that mesh networks render the distinction between cloud and endpoint technically meaningless in modern military architectures.
The law problem. “All lawful purposes” sounds like a clear boundary. But laws are written — and rewritten. Executive Order 14110, which established AI safety standards at the federal level, was revoked on the new administration’s first day in office. What is prohibited today may be permitted tomorrow. The contract binds OpenAI to the letter of the law, not to ethical principles.
The credibility problem. Altman initially signalled solidarity with Anthropic. He then signed the replacement contract. His own employees considered their company’s deal the wrong compromise: around 100 OpenAI staff signed an open letter that expressly supported Anthropic’s red lines.
OpenAI also asked the government to offer the same contractual terms to all AI labs and to resolve the dispute with Anthropic. Even the signatory of the replacement contract considered the treatment of Anthropic problematic — a remarkable act of distancing.
What happens when the ethical objections come not from the boardroom, but from the workforce in defiance of it?
The pattern: from Iran to Anthropic
The Anthropic case is not an isolated incident. It is the latest point in an escalation line stretching back decades.
Iran. An entire country — cut off from Google, Apple, cloud platforms, software updates. For over a decade. OFAC sanctions prohibit US companies from any business relationship. Not through bombs. Through licence revocation.
Russia. March 2022. Microsoft revokes Russian companies’ access to licences, cloud services and updates. Overnight. Without a transition period. Google, Apple, SAP and Oracle follow. The companies did not act voluntarily: US sanctions law left them no choice.
Huawei. 2019. The US government places the world’s largest 5G equipment manufacturer on the Entity List. No chips, no software, no licences. The justification: national security. The consequence: economic isolation.
The French judge. A European citizen on European soil. A judge at the International Criminal Court. He cannot book hotel rooms, rent a car, or shop online — because European payment transactions run through Visa and Mastercard and US sanctions law applies. (For an in-depth operational analysis of this case, see our digital risk audit.)
Anthropic. February 2026. For the first time: a US company is designated a supply chain risk. Not a foreign actor. An American firm that refused to abandon its own ethical principles.
The pattern is unmistakable. Each step widens the circle of those who can be targeted: countries classified as enemies. Countries in active conflicts. Foreign companies. Individuals on European soil. Domestic companies. What was unthinkable yesterday is today’s precedent.
Every licence is conditional
The software your employees launched this morning does not belong to you. It is borrowed — on the condition that Washington has no objection.
Every EULA that a European company signs with a US software provider contains a clause that is rarely read: the obligation to comply with US export control and sanctions law. This clause appears in the terms of service of Microsoft, Apple, Google, Amazon, Salesforce, Oracle and virtually every other US technology company.
What this clause means: the licence your company pays for — for Windows, Microsoft 365, AWS or Google Workspace — is valid subject to the condition that the US government does not take action to restrict or revoke access. This is not a theoretical scenario — it is precisely what happened in Iran and Russia.
The instruments available to the US government form an escalation ladder:
| Instrument | Lever | Deployed against |
|---|---|---|
| CLOUD Act | Data | All US technology companies |
| Sanctions law (OFAC) | Licences and services | Iran, Russia, Cuba, North Korea, individuals |
| Entity List | Technology exports | Huawei, Kaspersky, foreign companies |
| Defense Production Act | Technology itself | Anthropic (first use against a US company) |
| Supply chain designation | Economic isolation | Huawei, Kaspersky, Anthropic |
For European businesses, this means: the terms of service you agreed to with your US provider apply precisely as long as the US government does not override them. That this lever is not merely theoretical was demonstrated by Broadcom after the VMware acquisition — even without sanctions: tenfold price increases, elimination of perpetual licences, forced migration to subscriptions. Vendor lock-in makes it possible.
(For the legal analysis of each of these instruments, see our article Pentagon vs. Anthropic.)
Europe’s response
On 2 March 2026, SPD digital policy spokesperson Matthias Mieves wrote letters to EU Commission President Ursula von der Leyen, CDU leader Friedrich Merz and other decision-makers in Berlin and Brussels. His message: Europe should actively invite Anthropic to continue its AI development under European law (ZEIT / AFP / Reuters, 2 March 2026).
Mieves describes the pressure on Anthropic as “existentially threatening” and argues that the EU, under the EU AI Act, offers “optimal conditions” for human-centred AI development. It is the first time the EU AI Act has been explicitly framed as a location advantage for a specific company — not as a regulatory burden, but as a safe harbour.
The proposal merits a sober assessment. What speaks in favour:
- Europe would gain access to one of the most capable AI models in the world — developed under European law, operated on European infrastructure.
- The EU AI Act would function as precisely what it was designed to be: a framework that enables responsible AI development rather than preventing it.
- An Anthropic presence in Europe would signal to the global tech industry that European regulation is not synonymous with hostility to innovation.
What speaks against:
- Anthropic is deeply embedded in the US ecosystem: lead investor Amazon ($8 billion), infrastructure on AWS, talent pool in San Francisco.
- A relocation would not remove the company from the reach of the US market — and therefore not from US law.
- The EU currently has neither the compute infrastructure nor the venture capital to support an AI company of this scale on its own.
What Mieves’ initiative correctly identifies: the Anthropic case is a moment in which Europe has a strategic choice. Not the choice to solve every problem — but the choice to make an offer. Letting this opportunity pass unused would be the real risk. (For the analysis of why Anthropic’s safety standards came under pressure and how the EU AI Act differs from the US approach, see When Safety Becomes Negotiable.)
What this means for you
For individuals: The technologies you use every day — your operating system, your email, your cloud storage, your payment methods — are provided by US companies whose room for manoeuvre is subject to government intervention. In theory, this has always been the case. What has changed is the willingness to actually carry out such interventions — and the speed at which new precedents are being set. From the sanctioning of entire countries to the isolation of a Chinese conglomerate to the designation of a domestic company as a security risk: the circle of what is conceivable has expanded fundamentally in less than five years.
For businesses: Across Europe, companies are hiring Microsoft administrators right now. IT departments are planning migrations to the next Windows version. Apple devices are being ordered for the next fleet cycle. This is denial of reality — like renovating a house without checking who owns the ground it stands on. The problem is not that these products are bad. It is that companies recruit staff exclusively for systems that may be unavailable in an emergency, instead of training at least part of their workforce for the strategic shift that, in a crisis, separates a business interruption from total economic loss.
The Anthropic case adds a new variable to the risk equation. Until now, the calculus was: US providers are subject to the CLOUD Act (data access) and sanctions law (licence revocation on political grounds). Now an additional dimension has emerged: the Defense Production Act can compel US companies to provide their technology against their own will — and a supply chain designation can economically isolate companies that refuse.
For a complete migration to digital independence, the timeline may already be too late. What is not too late: having a plan. Training your administrators for an emergency. Having a solution ready in the drawer. Our digital risk audit provides a structured entry point for this preparation.
For the debate: Europe recognises the danger from one side — a Russian war industry that has gained operational experience from three years of war in Ukraine, experience no EU member state is prepared for: a thousand drone strikes on civilian infrastructure per night, drone-based warfare at a scale where ten Ukrainian pilots disabled two entire NATO battalions in a single exercise. Europe talks tough. It is powerless. What it entirely ignores is the danger from the other side. Not from the east. From across the Atlantic. Not in the form of tanks. In the form of licence terms.
Europe is not the beloved child leaving the family home. It is Cinderella. And there is no prince in sight.
The father in this story is unpredictable — a man who does not know himself what he will do next. A 36-page executive order is revoked on the first day in office. Sanctions are imposed without consultation of trading partners. An AI company is declared a supply chain risk while its model is simultaneously used for active military operations.
The Anthropic case demonstrates that this unpredictability is escalating visibly. The window in which the decision is still a strategic choice rather than a reaction to a crisis is closing.
If not now, when?
Sources
- Inside Anthropic’s Killer-Robot Dispute With the Pentagon (The Atlantic, March 2026)
- Anthropic vs. Pentagon: Der Oppenheimer-Moment (Golem.de, March 2026)
- Statement on Comments from Secretary of War (Anthropic, Feb. 2026)
- Our Agreement with the Department of War (OpenAI, Feb. 2026)
- Suspending new sales in Russia (Microsoft, March 2022)
- Euro cloud body says Broadcom licensing unfair (The Register, May 2025)
- NATO exercise reveals weaknesses in drone defence (Der Spiegel, Feb. 2026)
- SPD-Digitalexperte will Anthropic nach Europa holen (ZEIT / AFP / Reuters, March 2026)
- Pentagon vs. Anthropic: A Strategic Analysis (digital-independence.org, Feb. 2026)
- When Safety Becomes Negotiable (digital-independence.org, Feb. 2026)
- Digital Risk Audit (digital-independence.org, Feb. 2026)
Topic overview: All articles on digital-independence.org →