Interesting and early. Worth a spike or exploration session.
Prompt Injection Defense
Prompt injection remains the #1 LLM security risk with 73% of deployments affected, but only 35% have dedicated defenses — the gap is a liability.
Agentic·Infrastructure
owasp.orgOur Take
What It Is
Prompt injection is the attack class where malicious inputs — embedded in documents, web pages, emails, or user messages — manipulate an AI model into performing unintended actions. Defence techniques span input sanitisation, output filtering, system prompt hardening, instruction hierarchy, and dedicated detection models. It remains the #1 vulnerability in OWASP's Top 10 for LLM Applications.
Why It Matters
Prompt injection stays in Emerging, but the urgency has increased significantly. Three data points tell the story: 73% of production AI deployments are affected by prompt injection vulnerabilities, only 34.7% have dedicated defences, and recent CVEs are hitting severity scores of 9.3-9.8 across tools like Copilot, GitHub Copilot, and Cursor. These aren't theoretical risks — they're published vulnerabilities in tools developers use daily.
For teams deploying agentic systems (which are growing rapidly), the attack surface multiplies. An agent with tool access that's vulnerable to injection can execute commands, exfiltrate data, or modify files. The defence gap between what's possible and what's deployed is a liability for any organisation running AI in production.
Key Developments
- Mar 2026: CVE disclosures hit Copilot (9.3), GitHub Copilot (9.6), and Cursor (9.8) — high-severity injection vulnerabilities in mainstream tools.
- Feb 2026: Research shows only 34.7% of production AI deployments have dedicated prompt injection defences.
- Jan 2026: Prompt injection defence market growing at 31.5% CAGR, reflecting increased enterprise demand.
- Dec 2025: OWASP reconfirms prompt injection as #1 LLM vulnerability for the third consecutive year.
What to Watch
The EU AI Act compliance deadline in August 2026 will force organisations to demonstrate security measures for high-risk AI systems. Watch for whether mainstream AI providers build injection defence into their APIs (rather than leaving it to application developers). The emergence of dedicated defence products (beyond research papers) would signal readiness for a move to Promising.
Strengths
- Awareness growing: OWASP, NIST, and EU AI Act are driving organisational attention to injection defence as a compliance requirement.
- Research velocity: Academic and industry research on detection and mitigation techniques is accelerating rapidly.
- Market demand: 31.5% CAGR in the defence market reflects genuine willingness to pay for solutions.
Considerations
- No silver bullet: No single technique eliminates prompt injection risk. Defence requires layered approaches across the full stack.
- Cat-and-mouse dynamics: As defences improve, attack techniques evolve. This is an ongoing arms race, not a solvable problem.
- Performance trade-offs: Input filtering and output validation add latency to every model call. At scale, this impacts user experience.
- False positives: Aggressive filtering can block legitimate inputs, degrading application functionality.
Resources
Documentation