AI in Europe: Open vs. US Monopoly
AI is rapidly becoming critical infrastructure — for code generation, document processing, customer interaction, decision support. Most organisations today rely on US providers for these capabilities: OpenAI, Google, Anthropic. These providers lead for good reasons — mature APIs, frontier model performance, massive investment in safety research. But the concentration carries familiar risks: jurisdictional exposure, vendor lock-in, and the assumption that the current balance of power is permanent.
Then came the event that challenged that assumption.
On 27 January 2025, the Chinese AI lab DeepSeek released its R1 model — a reasoning model that matched or exceeded OpenAI’s o1 on most benchmarks. The disclosed cost of the final training run: $5.6 million. That same day, Nvidia lost $589 billion in market capitalisation — the largest single-day market-value loss for any company in stock market history.
The DeepSeek moment was not about China. It was about the collapse of a narrative: that building frontier AI requires billions of dollars, tens of thousands of GPUs, and — by implication — the resources of a Silicon Valley giant. If a Chinese lab could match OpenAI’s best model at a fraction of the cost, the entry barrier to competitive AI was far lower than the industry had claimed.
For organisations evaluating AI deployment options, this was significant: it validated the premise that competitive AI doesn’t require hyperscaler-scale resources — lowering the barrier for sovereign deployment on modest infrastructure.
The European AI Landscape: A Taxonomy
Understanding Europe’s position requires distinguishing between three very different things: companies building foundation models, platforms enabling AI deployment, and the regulatory framework governing both.
Mistral AI: The European Contender
Mistral AI is the closest thing Europe has to a frontier AI company. Founded in Paris in 2023 by former Google DeepMind and Meta researchers, Mistral has executed a remarkable ascent: over $400 million in annual recurring revenue within three years of founding, and a September 2025 funding round of €1.7 billion that valued the company at €11.7 billion. Its more than 100 enterprise customers include government agencies.
Mistral’s strategic brilliance lies in its dual-track model: release competitive open-weight models (Mistral 7B, Mixtral 8x7B) that build community and mindshare, while selling premium proprietary models (Mistral Large) and enterprise services. The open-weight models are good enough to run on private infrastructure — which matters for organisations that can’t send sensitive data to a US cloud API.
Mistral Large 3, released in December 2025, uses 41 billion active parameters (675 billion total in a mixture-of-experts architecture). The model is competitive with GPT-4-class models on most benchmarks. For European organisations, the appeal is straightforward: a competitive LLM from a French company, deployable on European infrastructure, developed under European jurisdiction.
The caveat: Mistral is a venture-funded startup. Its investors include General Catalyst, Andreessen Horowitz, and Lightspeed — US venture capital firms. Mistral’s research leadership is European, its headquarters is in Paris, and its largest team is French. But the capital structure is transatlantic. This is the reality of European tech: the founders are European, the capital is often American, and “sovereignty” gets complicated when you follow the money.
Aleph Alpha: The Cautionary Tale
If Mistral is the success story, Aleph Alpha is the cautionary tale — though the narrative is more nuanced than the headlines suggest.
Founded in Heidelberg in 2019, Aleph Alpha raised approximately $533 million (though only about €110 million was equity — the rest was debt and service contracts). The company set out to build sovereign European foundation models. Its Luminous model family was positioned as the European alternative to GPT — developed in Germany, trained on European data, deployable on sovereign infrastructure.
In September 2024, Aleph Alpha pivoted. The company stopped developing its own foundation models and repositioned as an enterprise AI infrastructure provider under the brand PhariaAI. The pitch shifted from “we build European LLMs” to “we help enterprises deploy any LLM securely.”
The pivot was rational. Training frontier models costs hundreds of millions of dollars per generation. Each new model from OpenAI, Google, or Meta raised the bar. Aleph Alpha could not match the training budgets of companies backed by Microsoft ($13 billion into OpenAI) or Alphabet (effectively unlimited compute). Rather than burn through capital in a losing race, the company chose a defensible market: the “how” of enterprise AI deployment rather than the “what” of model development.
The lesson for European AI sovereignty is uncomfortable: foundation model development may be too capital-intensive for all but the best-funded European companies. Mistral has managed it — so far. Whether any other European company can sustain the investment is an open question.
Hugging Face: European Roots, American Address
Hugging Face deserves mention as the most important AI infrastructure company most people have never heard of. The platform hosts over 2 million models and 500,000+ datasets, making it the de facto hub for open-source and open-weight AI.
Hugging Face was founded in Paris by French entrepreneurs Clément Delangue, Julien Chaumond, and Thomas Wolf. Its largest engineering presence — roughly 70 people from the original core team — remains in Paris. But the company is headquartered in New York, its valuation is driven by US investors, and its legal domicile is American.
This is the brain drain in action: European founders, European talent, European innovation — but US incorporation because that’s where the capital, the customers, and the legal environment are most favourable. From a dependency perspective, Hugging Face’s US incorporation means the platform operates under US jurisdiction — which matters for organisations evaluating their AI supply chain. The platform’s open-source nature mitigates this: the code and models can be self-hosted regardless of the company’s domicile.
Open Weight: The Sovereignty Enabler
The rise of open-weight models has fundamentally changed the sovereignty equation for AI.
Before LLaMA (Meta, February 2023), using a competitive LLM meant sending your data to OpenAI’s API — hosted on Microsoft Azure, under US jurisdiction. There was no alternative. The model weights were proprietary, the inference happened on the provider’s infrastructure, and every prompt you sent became training data (until opt-out policies were grudgingly introduced).
Open-weight models changed this. When model weights are publicly available, organisations can:
- Run models on their own infrastructure — in a European data centre, on a European cloud provider, or on-premises. No data leaves the organisation.
- Fine-tune models for specific use cases — legal analysis, medical documentation, government correspondence — using proprietary data that never touches a US server.
- Choose their deployment — cloud API for convenience, self-hosted for sovereignty, or hybrid for pragmatism.
The competitive open-weight landscape in early 2026 is remarkably rich:
- Meta LLaMA 3.1/3.2: The most widely adopted open-weight family. The licence permits commercial use but requires a separate agreement with Meta for applications exceeding 700 million monthly active users. Not open source by the OSI definition, but permissive enough for most organisations.
- Mistral models: Several under Apache 2.0 (genuinely open source), others with custom licences. European origin.
- DeepSeek R1: MIT licence — the most permissive of any frontier model. Chinese origin, which raises different sovereignty questions.
- Qwen (Alibaba): Apache 2.0 licence. Chinese origin.
The irony is thick: the most permissive licensing for frontier AI models comes not from Europe or the US, but from Chinese companies. DeepSeek R1 under MIT licence gives any European organisation the legal right to deploy, modify, and commercialise a frontier reasoning model. European AI sovereignty, in 2026, is partly enabled by Chinese openness.
The Compute Question
Running open-weight models locally requires hardware — specifically, GPUs with large memory. A Mistral 7B model can run on a single consumer GPU. A 70B model requires multiple high-end GPUs. Frontier models in the 400B+ parameter range require GPU clusters that cost hundreds of thousands of euros.
European GPU availability has improved but remains constrained. The relevant comparison:
- Scaleway (France) offers NVIDIA H100 instances from approximately €2.73 per hour
- Hetzner (Germany) has expanded its GPU offerings, though availability is limited
- OVHcloud (France) offers dedicated GPU servers
- US hyperscalers (AWS, Azure, GCP) have the broadest GPU availability but at higher prices and under US jurisdiction
For organisations deploying models up to 70B parameters — which covers the vast majority of practical use cases — European compute is sufficient and often cheaper. For training new frontier models or deploying the largest models at scale, the US cloud still has an infrastructure advantage.
The more interesting development is the falling hardware cost. Quantisation techniques (reducing model precision from 16-bit to 8-bit or 4-bit) have made it possible to run models that previously required expensive GPU clusters on significantly cheaper hardware — sometimes even on desktop machines. A quantised Mistral 7B runs comfortably on a laptop with a decent GPU. This democratisation of inference is as important for sovereignty as the open-weight movement itself.
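The memory arithmetic behind these claims is simple enough to sketch. The rule of thumb below — model weights times a fixed overhead factor for KV cache and activations — is an approximation, not a benchmark; real usage depends on batch size and context length:

```python
def vram_estimate_gb(params_billion: float, bits_per_param: int,
                     overhead: float = 1.2) -> float:
    """Rough VRAM needed to serve a dense model: weight bytes x overhead.

    overhead=1.2 is a crude allowance for KV cache and activations.
    """
    weight_bytes = params_billion * 1e9 * bits_per_param / 8
    return weight_bytes * overhead / 1e9

# 7B:  ~17 GB at 16-bit (a 24 GB GPU), ~4 GB at 4-bit (laptop-class)
# 70B: ~168 GB at 16-bit (multiple GPUs), ~42 GB at 4-bit (one 48 GB card)
for params in (7, 70):
    for bits in (16, 8, 4):
        print(f"{params}B @ {bits}-bit: ~{vram_estimate_gb(params, bits):.0f} GB")
```

The jump from 16-bit to 4-bit is the entire story of consumer-hardware inference: the same 70B model drops from a multi-GPU cluster to a single workstation card.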
The EU AI Act: Regulation as Double-Edged Sword
The EU AI Act (Regulation (EU) 2024/1689), which entered into force on 1 August 2024, is the world’s first comprehensive AI regulation. Its phased implementation runs through August 2027.
The Act classifies AI systems by risk and imposes corresponding obligations. For general-purpose AI (GPAI) — which includes all major LLMs — providers must supply technical documentation and comply with copyright law. Models with “systemic risk” (currently defined as training compute exceeding 10²⁵ FLOPs) face additional requirements: adversarial testing, incident reporting, and model evaluation.
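The systemic-risk threshold can be made concrete with the commonly used training-compute approximation C ≈ 6·N·D (N parameters, D training tokens). The token counts below are illustrative assumptions, not disclosed figures:

```python
SYSTEMIC_RISK_FLOPS = 1e25  # AI Act threshold for GPAI with systemic risk

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute via the standard C = 6*N*D estimate."""
    return 6 * params * tokens

# A 70B-parameter model trained on 15 trillion tokens stays under the
# threshold; a 405B-parameter model on the same data crosses it.
small = training_flops(70e9, 15e12)   # ~6.3e24 FLOPs
large = training_flops(405e9, 15e12)  # ~3.6e25 FLOPs
print(small < SYSTEMIC_RISK_FLOPS, large > SYSTEMIC_RISK_FLOPS)
```

The estimate makes the policy point visible: most models an organisation would self-host sit comfortably below the systemic-risk line.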
Open-weight models receive some exemptions: providers of free and open-weight GPAI models are exempt from the technical-documentation obligations, though the copyright-policy and training-data-summary requirements still apply, and the exemption falls away entirely if the model is classified as systemic risk. This is a deliberate incentive for openness.
The regulatory burden concern is real. Compliance with the AI Act — particularly for high-risk applications — requires legal expertise, documentation, conformity assessments, and ongoing monitoring. For a startup like Mistral, these costs are manageable. For smaller European AI companies, they may be prohibitive. The risk is that regulation intended to ensure safety and transparency instead consolidates the market around well-funded players who can afford compliance departments.
The counterargument: without regulation, the AI market would be even more dominated by US companies that can outspend European competitors. The AI Act creates rules that apply equally to all providers operating in Europe — including OpenAI, Google, and Anthropic. Whether this levels the playing field or adds friction equally is the central policy debate.
That the Berlin summit on digital sovereignty in November 2025 itself called for a 12-month postponement of the AI Act’s high-risk provisions is a quiet admission that the balance between regulation and innovation hasn’t been found yet.
Building a Sovereign AI Stack
For organisations seeking to reduce their AI dependency on US providers, a practical sovereign AI stack in 2026 looks like this:
Model layer: Open-weight models — Mistral (European, Apache 2.0 for some models), LLaMA (US, permissive licence), DeepSeek R1 (Chinese, MIT licence). Choose based on use case, performance requirements, and sovereignty preferences.
Inference layer: Self-hosted on European cloud infrastructure — Scaleway, Hetzner, OVHcloud, or SCS-certified providers. Frameworks like vLLM, llama.cpp, or Hugging Face’s Text Generation Inference (TGI) for serving.
Fine-tuning layer: Domain-specific adaptation using private data on local hardware. Tools like Axolotl, LoRA adapters, or Hugging Face’s PEFT library make this accessible without requiring frontier-scale resources.
Application layer: Integration into organisational workflows via standard APIs. The OpenAI API format has become a de facto standard — most open-weight inference servers implement it, enabling drop-in replacement of OpenAI with self-hosted alternatives.
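Because the format is a de facto standard, a self-hosted server can be called with nothing but the standard library. A minimal sketch — the base URL and model name are placeholders for whatever your inference server exposes:

```python
import json
import urllib.request

def build_payload(model: str, user_message: str,
                  temperature: float = 0.2) -> dict:
    """Assemble an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
    }

def chat(base_url: str, model: str, user_message: str) -> str:
    """POST to any OpenAI-compatible endpoint (vLLM, TGI, llama.cpp server)."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_payload(model, user_message)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# e.g. chat("http://localhost:8000", "mistral-7b-instruct", "Summarise: ...")
```

Swapping providers then means changing a URL and a model name, not rewriting application code — which is precisely what makes the self-hosted route a drop-in replacement.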
Governance layer: The EU AI Act requirements for documentation, risk assessment, and transparency — which apply regardless of whether the model is proprietary or open-weight, self-hosted or cloud-based.
This stack is real. It works. Organisations across Europe are deploying it — quietly, without press releases, because the tools are mature enough that it’s become routine engineering rather than a research project.
What Europe Has, What It Lacks
Europe has:
- One genuinely competitive foundation model company (Mistral)
- The world’s largest open-source AI platform (Hugging Face — with caveats about its US domicile)
- A comprehensive regulatory framework (AI Act)
- Sufficient GPU compute for most practical deployments
- Regulatory momentum (AI Act, Data Act, EP report, Berlin summit commitments)
Europe lacks:
- Training-scale compute for frontier models (the gap is narrowing but real)
- A second foundation model company to ensure competition if Mistral falters
- Venture capital at US scale — European AI funding is growing but remains a fraction of US levels
- A track record of translating regulatory frameworks into actual procurement mandates
- A coherent strategy for AI talent retention — the brain drain continues
What Follows
The AI sovereignty question is different from cloud or office sovereignty. Cloud infrastructure can be replicated — the technology is well understood, and European providers have the expertise. Office software can be rebuilt — LibreOffice, Nextcloud, and Matrix prove it. But frontier AI development requires a concentration of capital, talent, and compute that Europe has struggled to assemble.
The open-weight movement has changed the equation. Europe doesn’t need to train frontier models to benefit from them. DeepSeek R1, LLaMA, and Mistral’s open models provide a foundation that European organisations can deploy, fine-tune, and build upon — on European infrastructure, under European law.
The pragmatic path is not “Europe must build its own GPT” — that ship may have sailed. The pragmatic path is: run the best available models on infrastructure under your operational control, with sensitive data remaining under a jurisdiction you can enforce, governed by a regulatory framework that ensures transparency and accountability.
That is achievable. It is, in many organisations, already happening. The sovereign AI stack described above is not theoretical — here is how to start building it, ordered by effort:
This week (minimal effort, immediate sovereignty gain):
- Stop sending sensitive data to US-hosted AI APIs. Every document you send to OpenAI’s or Anthropic’s API is processed under US jurisdiction. For legal documents, health data, or HR decisions, this is a compliance risk under both GDPR and the AI Act. Identify which workflows currently use external AI and assess the data sensitivity.
This month (a weekend project for your infrastructure team):
- Deploy a local LLM. A Mistral 7B or LLaMA 3.1 8B instance on a Hetzner GPU server (around €150/month) handles summarisation, classification, and code assistance for a team of 20–50 people. Tools like vLLM and llama.cpp make deployment straightforward. This is not research — it is infrastructure.
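As a concrete sketch of what a weekend deployment looks like, assuming the llama.cpp binaries are installed and a quantised GGUF file has already been downloaded (the filename here is a placeholder):

```shell
# Serve a quantised Mistral 7B on port 8080 with an OpenAI-compatible API
llama-server -m mistral-7b-instruct-q4_k_m.gguf --port 8080 -c 8192

# Any OpenAI-format client can now talk to it locally:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Classify this ticket: ..."}]}'
```

No data leaves the machine; the same two commands work on a Hetzner GPU server or an on-premises box.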
This quarter (project-level effort):
- Fine-tune a model on your domain data. Generic LLMs are useful; domain-adapted models are transformative. Tools like Axolotl and LoRA adapters make fine-tuning accessible on a single GPU. European providers (Hetzner, Scaleway, Lambda Cloud) offer the compute.
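Why LoRA fits on a single GPU comes down to parameter counts: a rank-r adapter on a d_in × d_out weight matrix trains r·(d_in + d_out) parameters instead of d_in·d_out. A quick illustration, with dimensions chosen to resemble a 7B-class attention projection (an assumption for the example):

```python
def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    """Parameters in a LoRA adapter pair (A: d_in x rank, B: rank x d_out)."""
    return rank * (d_in + d_out)

full = 4096 * 4096                           # full fine-tune of one projection
lora = lora_trainable_params(4096, 4096, 8)  # rank-8 adapter, same matrix
print(f"full: {full:,}  lora: {lora:,}  ratio: {lora / full:.2%}")
```

Training well under one percent of the weights per adapted layer is what moves fine-tuning from a cluster job to a single-GPU task.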
Before August 2027 (AI Act high-risk deadline):
- If you deploy AI in high-risk domains (HR screening, healthcare triage, law enforcement, credit scoring), the AI Act’s conformity requirements take full effect in August 2027. Start your risk classification and compliance assessment now — the organisations that wait until 2027 will face the same rushed implementation that GDPR laggards experienced in 2018.
Sovereign AI is no longer a research agenda — it is an infrastructure decision. The models exist. The deployment tools are mature. The European compute providers are ready. What remains is execution: choosing to run your own stack instead of renting someone else’s, and starting before the regulatory deadlines make the choice for you.
Sources
- DeepSeek R1 training cost and market impact (Bloomberg, January 2025) (paywall)
- Nvidia market cap loss (Reuters, 27 January 2025) (paywall)
- Mistral AI — news and funding history (Mistral AI blog)
- Aleph Alpha pivots to PhariaAI (TechCrunch, 2024)
- Aleph Alpha launches PhariaAI (official)
- Hugging Face model hub (huggingface.co)
- DeepSeek R1 MIT licence (GitHub)
- Meta LLaMA licence — not open source (OSI)
- EU AI Act full text (EUR-Lex)
- OSI Open Source AI Definition v1.0 (October 2024)
- Berlin summit — AI Act postponement (Élysée, 2025)