Prompt Injection Report
Prompt injection regulatory liability
policy

When Prompt Injection Becomes a Regulatory Failure: Liability Surfaces for Foundation Model Deployers

Prompt injection has been a security problem since 2022. As of 2026, it's also a compliance problem. Where the regulatory liability actually attaches, and what deployers should document.

By Marcus Reyes · · 8 min read

For three years, prompt injection has lived in the security-engineering bucket. As of 2026, it’s also lived in the compliance bucket — driven by a combination of EU AI Act provisions, sector-specific regulations, and emerging case law on data protection. This post maps where the regulatory liability actually attaches when prompt injection succeeds, and what deployers (Article 25 in EU AI Act terms) should be documenting.

This is not legal advice. Validate against counsel before relying on it.

The regulatory levers

EU AI Act Article 16 (provider obligations)

Foundation model providers must implement risk management throughout the model lifecycle. The Act doesn’t enumerate prompt injection by name, but the AI Office’s draft enforcement guidance (consultation phase as of Q2 2026) explicitly cites prompt injection as a risk class providers must address through:

Failure to address known prompt injection classes — and prompt injection is now well-known — exposes the provider to administrative fines under Article 99.

EU AI Act Article 25 (deployer obligations)

Deployers (the parties using a foundation model in their AI systems) have separate obligations. Most relevantly:

Deployers cannot transfer their obligations to the model provider. Even if the model provider’s defenses are inadequate, the deployer is responsible for the deployed system.

GDPR Article 5 (data minimisation, integrity, confidentiality)

When prompt injection causes a model to leak training data or session context that includes personal data, this is a GDPR breach. The EDPB Opinion 28/2024 clarifies that AI-system-mediated leakage falls under Article 5 obligations, and that controllers cannot delegate the technical safeguards to processors.

Result: prompt injection that leaks PII triggers the breach notification obligation (Articles 33-34) — within 72 hours to the supervisory authority, and to data subjects without undue delay if the breach is high-risk.

Sector-specific implications

Where liability actually attaches

Provider liability

Provider liability under the EU AI Act centers on whether the provider:

  1. Knew or should have known about the prompt injection class
  2. Implemented documented mitigations proportionate to the risk
  3. Disclosed the limitation in the model’s technical documentation
  4. Updated the documentation when new injection classes emerged

A provider that publishes a model with no injection mitigations and no documentation faces administrative fines up to 3% of global annual turnover (Article 99) for non-compliance with provider obligations. A provider that mitigates and documents reduces the liability surface even when injection succeeds in deployment.

Deployer liability

Deployer liability under the AI Act is heavier than most teams have internalized:

Deployers cannot rely on the provider’s representations alone. The Act specifically prohibits this offloading of responsibility — Article 16 obligations sit with providers, Article 26 obligations sit with deployers, and they don’t transfer by contract.

What deployers should document

A practical compliance file for a deployer using a foundation model in an AI system:

  1. Risk assessment identifying prompt injection as a relevant threat for the deployment
  2. Mitigation inventory: what input filters, output validators, capability scopes, and monitoring are in place
  3. Eval evidence: results of running known injection corpora against the system, with dates and configurations
  4. Detection coverage: documentation of which injection classes are detectable in production telemetry
  5. Incident response runbook: who is notified, on what timeline, when injection is detected
  6. Update log: changes to the above when new injection classes emerge or new mitigations are added
  7. Provider-side dependencies: documentation of which mitigations the provider claims, with citations to provider documentation

This file is what’s produced for an audit. Without it, “we have AI security” is not legally meaningful.

What providers should document

For foundation model providers, the documentation under Article 53 includes:

Litigation outlook

Class-action plaintiffs’ bars are increasingly aware of AI security failures. Pattern of cases through 2026:

  1. PII leakage cases — most active class, modest individual damages, large class sizes
  2. Discriminatory output cases — when injection causes the system to produce protected-class-related output
  3. Liability transfer cases — testing whether a deployer can sue a provider for breach of representation

The third category is particularly important to track. Courts are not yet aligned on whether provider representations create enforceable duties to deployers; expect the answer to vary by jurisdiction.

Practitioner checklist

For deployer security and compliance leads:

The documentation work is mechanical. The risk it mitigates is material. The two things together are what compliance is.

What to watch in 2026

The window for “we’ll address this when it becomes a problem” closed at the end of 2025. In 2026, addressing it is the baseline.

Sources

  1. EU AI Act — Regulation (EU) 2024/1689
  2. OWASP Top 10 for LLM Applications
  3. EDPB Opinion on AI Models and GDPR
#prompt-injection #regulatory-liability #eu-ai-act #compliance #policy
Subscribe

Prompt Injection Report — in your inbox

Prompt injection PoCs, taxonomy, and primary sources. — delivered when there's something worth your inbox.

No spam. Unsubscribe anytime.

Related

Comments