Watch · Regulation & Practice · No change · March 2026

Early or unproven. Follow the space but don't commit yet.

EU AI Act

Our Take

The EU AI Act's August 2026 deadline is approaching — if you're deploying AI in Europe, start your conformity assessment now.

What It Is

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It classifies AI systems by risk level (unacceptable, high, limited, minimal) and sets requirements proportional to the risk. High-risk AI systems face the most stringent requirements: conformity assessments, technical documentation, human oversight, accuracy and robustness standards, and registration in the EU AI database.
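The risk-tier structure described above can be pictured as a simple mapping from tier to obligations. This is an illustrative sketch only — the tier names follow the Act, but the obligation labels are our own shorthand, not legal text:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative shorthand for the obligations attached to each tier.
# The authoritative list lives in the regulation itself, not in code.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: [
        "conformity assessment",
        "technical documentation",
        "human oversight",
        "accuracy and robustness standards",
        "EU database registration",
    ],
    RiskTier.LIMITED: ["transparency disclosures"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation list for a given risk tier."""
    return OBLIGATIONS[tier]
```

The point of the sketch is the shape of the framework: obligations scale with the tier, and only the high-risk tier carries the full assessment-and-registration burden.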

Why It Matters

The EU AI Act stays in Watch, but the August 2, 2026 enforcement date for high-risk AI system rules makes this increasingly time-sensitive. Organisations deploying AI systems in hiring, credit scoring, law enforcement, critical infrastructure, or education within the EU need to demonstrate compliance through formal conformity assessments, CE marking, and EU database registration.

There's a proposal to postpone the high-risk rules to December 2027, but it hasn't been confirmed. Planning for August 2026 remains the prudent approach. For organisations outside the EU, the extraterritorial scope means you're covered if your AI system's output is used within the EU — similar to GDPR's reach.

Key Developments

  • Mar 2026: August 2, 2026 enforcement date for high-risk AI system requirements confirmed. Possible postponement to Dec 2027 proposed but not ratified.
  • Feb 2026: AI Office publishes conformity assessment guidance and CE marking procedures for high-risk systems.
  • Jan 2026: Prohibited AI practices enforcement began (emotion recognition in workplaces, social scoring, certain biometric uses).

What to Watch

The postponement debate is the immediate signal. If confirmed, it gives organisations 18 additional months; if not, August 2026 is five months away. Watch for enforcement actions following the prohibited-practices rules that took effect in January — early enforcement patterns will signal how aggressively the AI Office interprets the regulation. Also track how the Act interacts with general-purpose AI model requirements, which affect providers of foundation models regardless of downstream use case.

Strengths

  • Regulatory clarity: Risk-based classification provides a clear framework for assessing AI system obligations.
  • Global influence: GDPR set the template for data protection globally. The AI Act is likely to have similar regulatory ripple effects.
  • Consumer protection: Mandatory transparency, human oversight, and accuracy requirements for high-risk systems protect end users.

Considerations

  • Compliance cost: Conformity assessments, technical documentation, and ongoing monitoring add significant cost to AI deployments.
  • Classification ambiguity: Determining whether your AI system qualifies as "high-risk" requires legal interpretation that isn't always straightforward.
  • Pace of change: AI capabilities evolve faster than regulatory frameworks can adapt. Requirements defined in 2024 may not fit 2026 technology.
  • Enforcement uncertainty: How aggressively the AI Office enforces the regulation is unknown. Early enforcement actions will set the tone.

More in Regulation & Practice

EU AI Act · ISO 42001 / AI Governance
