Copilot Bypasses DLP, Leaks Emails
In January 2026, organisations using Microsoft 365 discovered that Copilot Chat was summarising emails marked as confidential — even when Data Loss Prevention (DLP) policies were explicitly configured to prevent it. The bug was reported by customers on 21 January. Microsoft acknowledged it in early February in a notice tracked as CW1226324.
What happened
Microsoft 365 uses sensitivity labels (e.g. “Confidential”, “Highly Confidential”) and DLP policies via Microsoft Purview to control how data flows within an organisation. The expectation: if an email is labelled confidential, AI tools should not process it.
The reality: a code defect caused Copilot Chat to pick up items from the Sent Items and Drafts folders regardless of their sensitivity labels. The AI summarised confidential emails on request, serving the content through the Copilot Chat “Work” tab.
Microsoft’s own documentation already states that sensitivity labels do not apply consistently across all Copilot surfaces — a caveat many administrators may not have noticed.
The sovereignty angle
This incident is not about a single bug. It illustrates a structural problem: organisations that outsource critical communication to a cloud AI platform cannot independently verify what that AI accesses. DLP policies are a contractual and technical promise — but when the AI is closed-source, running on someone else’s infrastructure, and updated at the vendor’s discretion, the organisation has no way to audit compliance in real time.
For European organisations already operating under CLOUD Act jurisdiction risk, this adds a second layer of concern: not just who can legally access your data, but which AI features are silently processing it.
Microsoft has since deployed a configuration update. But the incident ran for weeks before it was acknowledged — weeks during which confidential content was being processed by an AI model without authorisation.
What organisations can do
- Audit Copilot’s actual behaviour, not just your policy configuration. The two do not necessarily match.
- Test DLP policies against AI surfaces specifically — sensitivity labels that work in Outlook may not apply in Copilot Chat, Teams, or other connected experiences.
- Evaluate whether AI features should be enabled at all for sensitive communication categories. In Microsoft 365 the choice is made at deployment: once an administrator enables Copilot, users get it by default rather than opting in individually.
- Consider the control asymmetry. In a self-hosted open-source stack, a bug like this can be found by reading the source code. In a proprietary SaaS platform, you learn about it when the vendor decides to tell you.
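One lightweight way to act on the first two points is a canary matrix: label a handful of test emails, then record, surface by surface, whether the label was actually enforced. A minimal sketch of that harness follows; the labels, surfaces, and observations are illustrative, and populating `observed` requires manually exercising each surface with the labelled test emails.

```python
# Canary matrix: expected vs observed sensitivity-label enforcement across
# AI surfaces. All data here is illustrative; "observed" must be filled in
# by manually testing each surface against labelled canary emails.

EXPECTED = {
    # (label, surface) -> should this surface be blocked from the content?
    ("Confidential", "Outlook"): True,
    ("Confidential", "Copilot Chat"): True,
    ("Confidential", "Teams"): True,
}

def enforcement_gaps(expected, observed):
    """Return (label, surface) pairs where policy and behaviour diverge."""
    gaps = []
    for key, should_block in expected.items():
        did_block = observed.get(key)
        if did_block is None:
            gaps.append((key, "untested"))       # never exercised: unknown
        elif did_block != should_block:
            gaps.append((key, "policy violated"))  # policy and behaviour differ
    return gaps

# Example run: Copilot Chat served the confidential canary despite DLP,
# which is the failure mode described above.
observed = {
    ("Confidential", "Outlook"): True,
    ("Confidential", "Copilot Chat"): False,
}
gaps = enforcement_gaps(EXPECTED, observed)
for key, verdict in gaps:
    print(key, "->", verdict)
```

The point of the exercise is the output: any "policy violated" or "untested" row is a surface you are trusting without evidence.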
What follows
This incident is a reminder: when you hand communication data to a closed-source AI, you are trusting that AI to respect your policies — but you cannot verify that it does. The bug existed for weeks before anyone noticed. Confidential data was processed without authorisation, and the only reason we know about it is that Microsoft chose to disclose it.
Organisations have three realistic responses:
This week (minimal effort):
- Disable Copilot for users handling sensitive data. Not all Microsoft 365 licences include Copilot — audit who has access and revoke it for roles handling confidential information.
- Enable Purview DLP for Copilot explicitly. Do not rely on the default configuration. Test it.
- Assume any email processed by Copilot may be accessed by Microsoft. If that is unacceptable, do not use Copilot for sensitive communication.
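The licence audit in the first step can start from the JSON that Microsoft Graph returns for `GET /v1.0/users?$select=displayName,assignedLicenses`. A minimal sketch of the filtering step is below; the SKU GUID is a deliberate placeholder (look up the actual Copilot SKU id in your own tenant), and fetching the directory listing itself requires authenticated Graph access not shown here.

```python
# Sketch: given the parsed response of
#   GET /v1.0/users?$select=displayName,assignedLicenses
# from Microsoft Graph, list the users holding a given licence SKU.
# COPILOT_SKU_ID is a placeholder; substitute the real Copilot SKU GUID
# from your tenant (visible via /v1.0/subscribedSkus).

COPILOT_SKU_ID = "00000000-0000-0000-0000-000000000000"  # placeholder GUID

def users_with_sku(users, sku_id):
    """Return display names of users whose assigned licences include sku_id."""
    return [
        u["displayName"]
        for u in users
        if any(lic.get("skuId") == sku_id for lic in u.get("assignedLicenses", []))
    ]

# Illustrative directory data in the shape Graph returns.
directory = [
    {"displayName": "A. Admin", "assignedLicenses": [{"skuId": COPILOT_SKU_ID}]},
    {"displayName": "B. Basic", "assignedLicenses": []},
]
copilot_users = users_with_sku(directory, COPILOT_SKU_ID)
print(copilot_users)
```

Everyone on that list who handles confidential material is a candidate for revocation until the DLP behaviour has been verified.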
This month (infrastructure project):
- Deploy a self-hosted summarisation AI. A local LLM on internal infrastructure — using open-weight models like Mistral 7B or LLaMA 3.1 8B — runs on your servers, under your control. The AI processes your data, not Microsoft’s. Tools like vLLM and Ollama make deployment straightforward on a single GPU server.
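As a sketch of how small that deployment can be: Ollama exposes a local HTTP API, so a summarisation call is a single POST to your own machine. The snippet below assumes Ollama is running on `localhost:11434` with a model already pulled (e.g. `ollama pull mistral`); the prompt wording is illustrative. The email body never leaves your infrastructure.

```python
# Sketch: summarise an email with a local Ollama instance rather than a
# cloud AI. Assumes Ollama is running on localhost:11434 and the named
# model has been pulled locally.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(email_body, model="mistral"):
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": "Summarise this email in three bullet points:\n\n" + email_body,
        "stream": False,  # ask for one complete response, not a token stream
    }

def summarise(email_body, model="mistral"):
    """Send the email to the local model and return the generated summary."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(email_body, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The same pattern works against a vLLM server by swapping the endpoint and payload shape; the design point is that the inference endpoint is one you run, audit, and can unplug.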
This quarter (strategic decision):
- Reconsider the Microsoft 365 bundle. Copilot is sold as part of Microsoft 365, but the security trade-off is explicit: you are adding an AI layer that processes everything in your tenant. For organisations with genuine data sovereignty requirements, the alternative is not “no AI” — it is AI on your infrastructure, with open-weight models, under your jurisdiction.
The Copilot incident will be forgotten next month. The next AI surprise will not. The organisations that prepare now will not have to explain a data breach later.