Updated 10 January 2026

After helping dozens of companies navigate EU AI Act compliance over the past year, I’ve noticed something rather troubling: nearly 60% of teams I work with initially misclassify their AI systems. The consequences are expensive – late redesigns, failed audits, and scrambled compliance efforts just months before enforcement deadlines.

The problem isn’t that high-risk classification is particularly complex. It’s that most teams approach it backwards, starting with their technology instead of their impact.

Why High-Risk System Classification Matters More Than You Might Think

Here’s what happens when you get high-risk classification wrong: everything else becomes exponentially harder. Miss the mark, and you’re suddenly dealing with risk management frameworks, detailed documentation requirements, human oversight protocols, and post-market monitoring – all retrofitted onto a system that wasn’t designed for these obligations.

I’ve seen engineering teams spend months rebuilding logging infrastructure because they assumed their ‘simple recommendation engine’ wouldn’t need compliance controls. Turns out, when your recommendations determine who gets hired or approved for loans, the EU AI Act has rather strong opinions about your system.

The classification question sounds straightforward: ‘Is our AI system high-risk?’ But the real question regulators want answered is deeper: ‘Does this system meaningfully influence decisions that could harm people?’

The Two Paths to High-Risk Territory

The EU AI Act creates two main routes to high-risk classification, and both focus on impact rather than technology.

First, there’s AI embedded in products already covered by EU regulations – medical devices, aviation systems, industrial machinery. If your AI is part of a pacemaker or helps control railway signals, you’re automatically in high-risk territory. These cases are usually obvious.

The trickier category is Annex III systems: AI used for recruitment, credit decisions, access to public services, biometric identification, law enforcement support, and educational assessment. What makes these tricky is that the same underlying technology can be high-risk in one context and perfectly fine in another.

Take a language model. Use it to write marketing copy? Probably not high-risk. Use it to screen job applications or assess student essays? Now we need to talk about compliance frameworks.

Real-World Examples: When the Same Tech Becomes High-Risk

Let me share some actual scenarios I’ve encountered that illustrate how context transforms classification:

Computer Vision in Retail vs Security
A high-street retailer uses computer vision to count footfall and analyse shopping patterns. Same technology deployed by a security firm to identify individuals for building access? The first is routine analytics; the second is biometric identification under Annex III.

Chatbots: Customer Service vs Mental Health
A telecoms company’s customer service chatbot that handles billing queries operates in low-risk territory. Deploy similar natural language processing to provide mental health support or counselling? You’re now dealing with decisions that could affect someone’s wellbeing and safety.

Recommendation Engines: E-commerce vs Employment
Amazon’s product recommendations don’t fall under high-risk classification. But when a recruitment platform uses similar collaborative filtering algorithms to suggest candidates to hiring managers, we’re in Annex III territory because those suggestions directly influence employment opportunities.

The Real High-Risk System Classification Test

After working through hundreds of these assessments, I’ve learnt that certain questions cut through the confusion faster than others:

Does your AI system influence real decisions about real people? Not just ‘provide information’ or ‘assist users’, but actually shape outcomes that matter to someone’s life, work, or opportunities?

Would a mistake by your system cause genuine harm? Not minor inconvenience, but something that could affect someone’s rights, safety, or access to important services?

Is the human oversight meaningful, or is it just rubber-stamping AI recommendations? I’ve seen far too many systems with nominal human review where the human never actually overrides the AI.

When we built the assessment framework for eyreACT, these became our core evaluation criteria because they map directly to regulatory concerns. Regulators care about power and consequences, not algorithms and APIs.
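
To make those questions concrete, here’s a minimal Python sketch of how they might be captured as a structured screening checklist. The field names and output strings are illustrative assumptions – this isn’t the eyreACT framework or the legal test itself, just a first-pass screen before the proper Annex mapping.

```python
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    """Answers to the three screening questions (field names are illustrative)."""
    influences_decisions_about_people: bool   # shapes outcomes in someone's life, work, or opportunities
    mistake_could_cause_genuine_harm: bool    # could affect rights, safety, or access to important services
    human_oversight_is_substantive: bool      # reviewers genuinely can and do override the AI


def provisional_screening(a: ImpactAssessment) -> str:
    """First-pass screening only: a formal classification still needs mapping
    against Annex I and Annex III, ideally with legal review."""
    if a.influences_decisions_about_people and a.mistake_could_cause_genuine_harm:
        if not a.human_oversight_is_substantive:
            return "treat as high-risk: impactful decisions with only ceremonial oversight"
        return "likely high-risk: identify the relevant Annex III category"
    return "lower-risk indication: document the reasoning and define review triggers"


# Example: a CV-screening tool whose suggestions drive most interview invitations
print(provisional_screening(ImpactAssessment(True, True, False)))
```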

Industry-Specific High-Risk Classification Scenarios

Different sectors face distinct classification challenges. Here’s how high-risk determination plays out across industries:

Financial Services

  • High-risk: Credit scoring algorithms, fraud detection systems that block transactions, algorithmic trading systems affecting customer portfolios, insurance claim processing automation
  • Likely not high-risk: Market research analytics, internal risk assessment tools (not customer-facing), back-office process automation

Healthcare and Life Sciences

  • High-risk: Diagnostic support systems, treatment recommendation engines, patient triage algorithms, clinical trial participant selection
  • Likely not high-risk: Administrative scheduling systems, research data analysis (not affecting patient care), billing and coding automation

Human Resources and Recruitment

  • High-risk: CV screening algorithms, interview scoring systems, performance evaluation tools, workforce planning that affects redundancies
  • Likely not high-risk: Internal training recommendation systems, office resource allocation, basic HR analytics without direct employee impact

Education Technology

  • High-risk: Automated essay grading, student assessment algorithms, admission decision support systems, plagiarism detection affecting grades
  • Likely not high-risk: Learning analytics for curriculum development, administrative systems, resource recommendation engines
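
Because context decides the outcome, some teams find it helpful to encode their own sector and use-case judgements in a simple lookup they can review and version. The sketch below is one way to do that; the entries paraphrase the lists above and are illustrative assumptions, not the Annex III legal text.

```python
# Sector/use-case pairs paraphrased from the lists above -- illustrative, not Annex III wording.
LIKELY_HIGH_RISK = {
    ("financial_services", "credit_scoring"),
    ("financial_services", "fraud_detection_blocking_transactions"),
    ("healthcare", "diagnostic_support"),
    ("healthcare", "patient_triage"),
    ("hr", "cv_screening"),
    ("hr", "performance_evaluation"),
    ("education", "automated_essay_grading"),
    ("education", "admission_decision_support"),
}

LIKELY_NOT_HIGH_RISK = {
    ("financial_services", "market_research_analytics"),
    ("healthcare", "administrative_scheduling"),
    ("hr", "internal_training_recommendations"),
    ("education", "curriculum_learning_analytics"),
}


def indication(sector: str, use_case: str) -> str:
    """Return a provisional indication; anything unrecognised goes to a human assessment."""
    key = (sector, use_case)
    if key in LIKELY_HIGH_RISK:
        return "likely high-risk: map against Annex III and document the reasoning"
    if key in LIKELY_NOT_HIGH_RISK:
        return "likely not high-risk: record why, and define review triggers"
    return "unclassified: run the full impact assessment"


print(indication("hr", "cv_screening"))           # likely high-risk
print(indication("retail", "footfall_counting"))  # unclassified: run the full impact assessment
```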

Where Teams Go Wrong

The most common mistake I see is what I call ‘technology tunnel vision’. Teams get fixated on their model architecture, API calls, or training data instead of stepping back to examine their system’s actual role in decision-making.

I worked with one startup that spent weeks arguing their hiring tool wasn’t high-risk because ‘it’s just filtering CVs, not making hiring decisions’. But when we mapped their actual workflow, the filtered results drove 90% of interview invitations. That’s not filtering – that’s decision-making with a thin layer of human approval.

Another pattern: responsibility confusion in multi-party deployments. One company builds the AI, another integrates it, a third deploys it to end users. Everyone assumes someone else is handling compliance. When audit time comes, no one has proper documentation or controls in place.

Common High-Risk Misclassification Patterns I’ve Observed

Here are the most frequent classification errors I encounter:

The “It’s Just Analytics” Fallacy
Teams assume that because they’re generating insights rather than making decisions, they’re in the clear. But when those insights directly inform decisions about individuals – like flagging someone as a fraud risk or highlighting underperforming employees – you’re in decision-support territory.

The “Human in the Loop” Myth
Having a human review AI outputs doesn’t automatically reduce risk classification. I’ve seen systems where humans approve 99.5% of AI recommendations without meaningful review. Regulators will examine whether human oversight is substantive or ceremonial.
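
One practical way to test whether your oversight is substantive is to measure it from your own review logs. The sketch below computes an override rate and average review time; the field names and the choice of these two signals are my assumptions, not regulatory metrics.

```python
from dataclasses import dataclass

@dataclass
class ReviewedDecision:
    ai_recommendation: str   # e.g. "reject" or "approve"
    human_decision: str      # what the reviewer actually decided
    review_seconds: float    # time the reviewer spent on the case


def oversight_summary(decisions: list[ReviewedDecision]) -> dict:
    """Summarise how often reviewers diverge from the AI and how long they spend.
    Very high agreement combined with very short review times is the pattern
    that tends to look ceremonial rather than substantive."""
    total = len(decisions)
    overrides = sum(d.ai_recommendation != d.human_decision for d in decisions)
    return {
        "override_rate": overrides / total,
        "avg_review_seconds": sum(d.review_seconds for d in decisions) / total,
    }


# Example: one override in 200 cases, roughly eight seconds per review
log = [ReviewedDecision("reject", "reject", 8.0)] * 199 + [ReviewedDecision("reject", "approve", 95.0)]
print(oversight_summary(log))  # {'override_rate': 0.005, 'avg_review_seconds': 8.435}
```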

The “We’re Just the Platform” Defence
Platform providers often claim they’re not responsible because customers control how the AI is used. But if you’re providing AI specifically designed for high-risk use cases – like recruitment screening or credit assessment – you can’t simply pass all responsibility to your customers.

The “API Dependency” Confusion
Using someone else’s foundation model doesn’t eliminate your obligations. You’re responsible for how you deploy and integrate that model, regardless of who built the underlying technology.

Sector-by-Sector High-Risk Assessment Guide

Based on my work across different industries, here’s a practical breakdown of common high-risk scenarios:

FinTech and Banking:

  • Personal loan approval algorithms
  • Investment advice robo-advisors
  • Fraud detection systems that freeze accounts
  • Credit limit adjustment automation
  • Insurance premium calculation engines
  • Mortgage application processing systems

Healthcare and MedTech:

  • Clinical decision support tools
  • Patient monitoring alert systems
  • Medical imaging analysis software
  • Drug interaction checking systems
  • Treatment pathway recommendation engines
  • Clinical trial matching platforms

HR Technology:

  • Applicant tracking systems with scoring
  • Employee performance evaluation tools
  • Redundancy selection algorithms
  • Promotion recommendation systems
  • Workplace monitoring and assessment tools
  • Skills gap analysis affecting career progression

The Solution Isn’t Complicated

Getting this right doesn’t require complicated tooling, but it does require discipline. Document your classification reasoning clearly, assign responsibility explicitly, and review your assessment whenever your system evolves. What starts as a simple recommendation feature can gradually become a decision-making system without anyone noticing.

I recommend creating what I call a ‘classification audit trail’ – a document that captures:

  • Initial assessment reasoning: Why you classified the system as you did
  • Key decision points: What factors pushed the system towards or away from high-risk status
  • Evidence and examples: Specific use cases and workflows that informed your decision
  • Review triggers: What changes would require reassessment
  • Responsibility mapping: Who owns compliance obligations in your organisation
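
To keep those entries consistent, it helps to give the audit trail a fixed structure. Here’s a minimal sketch mirroring the bullet points above; the field names and example values are illustrative, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ClassificationRecord:
    """One entry in the classification audit trail; field names are illustrative."""
    system_name: str
    assessed_on: date
    classification: str             # e.g. "high-risk (Annex III, employment)"
    reasoning: str                  # why you classified the system as you did
    key_decision_points: list[str]  # factors pushing towards or away from high-risk
    evidence: list[str]             # use cases, workflows, and metrics consulted
    review_triggers: list[str]      # changes that force reassessment
    compliance_owner: str           # who owns the obligations in your organisation


record = ClassificationRecord(
    system_name="candidate-screening-service",
    assessed_on=date(2026, 1, 10),
    classification="high-risk (Annex III, employment)",
    reasoning="Filtered shortlists drive the large majority of interview invitations.",
    key_decision_points=["direct influence on hiring outcomes", "nominal human review only"],
    evidence=["workflow mapping", "override-rate analysis of reviewer logs"],
    review_triggers=["new scoring model", "expansion into redundancy decisions"],
    compliance_owner="Head of People Analytics",
)
```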

Beyond High-Risk Classification: What Actually Changes

High-risk classification isn’t just a label – it’s a commitment to a different way of building and operating AI systems. You’re committing to risk management processes, data governance standards, testing protocols, human oversight mechanisms, cybersecurity measures, logging requirements, post-market monitoring, and comprehensive documentation.

This is where many teams discover that compliance isn’t a tick-box exercise. It’s an ongoing operational discipline that needs to be embedded into development workflows, not bolted on afterwards.

The Compliance Timeline Reality Check

Here’s what most teams underestimate about high-risk AI compliance:

Months 1-2: Assessment and Documentation

  • Detailed system mapping and impact assessment
  • Risk management framework development
  • Initial documentation and evidence gathering

Months 3-4: Technical Implementation

  • Logging and monitoring infrastructure
  • Data governance and quality controls
  • Testing and validation protocols

Months 5-6: Operational Integration

  • Human oversight procedures
  • Incident response protocols
  • Post-market monitoring systems

Ongoing: Maintenance and Review

  • Regular compliance reviews
  • System change management
  • Regulatory update monitoring
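
Of these phases, the logging and monitoring work in months 3-4 is usually the most concrete engineering task. As a rough sketch of the traceability involved, here’s a minimal per-decision log record; the schema, storage, and retention choices are my assumptions rather than anything the Act prescribes.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_decision_log")


def log_decision(model_version: str, inputs_ref: str, output: dict, reviewer: str | None) -> str:
    """Append one traceable record per automated decision: which model version ran,
    which input it saw (by reference, so raw personal data stays out of the logs),
    what it produced, and who reviewed it."""
    event_id = str(uuid.uuid4())
    logger.info(json.dumps({
        "event_id": event_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs_ref": inputs_ref,
        "output": output,
        "reviewer": reviewer,
    }))
    return event_id


# Example usage inside a scoring endpoint
log_decision("cv-screen-v2.3", "applications/2026/000123",
             {"score": 0.81, "shortlisted": True}, reviewer="recruiter_42")
```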

At eyreACT, we’ve built these requirements into guided workflows specifically because manual compliance tracking breaks down at scale. When you’re managing evidence across multiple development stages and risk categories, structured approaches aren’t just helpful – they’re essential.

Getting High-Risk Classification Right From The Start

The main EU AI Act obligations for high-risk systems apply from August 2026. That sounds like plenty of time, but compliance frameworks take months to implement properly. Teams that leave classification until the final months before the deadline will find themselves making expensive compromises.

Start with a structured assessment now, whilst you still have time to make thoughtful design decisions. Document your reasoning clearly, because regulators will ask for it. And remember: the goal isn’t to avoid being high-risk – it’s to build systems that are appropriately governed for their impact.

Final Recommendations

If you’re building AI that influences important decisions about people’s lives, embrace the high-risk classification and the responsibilities that come with it. Your users, your auditors, and ultimately your business will be better for it.

Consider these practical next steps:

  1. Conduct a systematic assessment using structured criteria aligned with EU AI Act requirements
  2. Document your reasoning in detail, with specific examples and evidence
  3. Map your compliance obligations across the full AI lifecycle
  4. Establish review processes for ongoing classification accuracy
  5. Build compliance into your development workflow rather than treating it as an afterthought
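
On that last point, one lightweight way to build classification into the workflow is a check that fails the build when the classification record is missing or stale. The file path, fields, and 180-day threshold below are hypothetical, purely for illustration.

```python
# A minimal pre-merge check that fails the build if the classification record
# is missing or stale. Path, fields, and threshold are illustrative assumptions,
# not EU AI Act requirements.
import json
import sys
from datetime import date, timedelta
from pathlib import Path

RECORD_PATH = Path("compliance/classification_record.json")  # hypothetical location
MAX_AGE = timedelta(days=180)


def check_classification_record() -> int:
    if not RECORD_PATH.exists():
        print("FAIL: no classification record found; run the assessment first")
        return 1
    record = json.loads(RECORD_PATH.read_text())
    assessed_on = date.fromisoformat(record["assessed_on"])
    if date.today() - assessed_on > MAX_AGE:
        print(f"FAIL: classification last reviewed {assessed_on}; reassess before merging")
        return 1
    print(f"OK: '{record['classification']}' assessment from {assessed_on} is current")
    return 0


if __name__ == "__main__":
    sys.exit(check_classification_record())
```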

The companies that get ahead of this now will have a significant competitive advantage when enforcement begins. Those that wait will find themselves scrambling to catch up whilst their better-prepared competitors are already scaling compliant systems.


Need help assessing your AI systems? Our free classification questionnaire walks through the key evaluation criteria and provides documentation templates for your compliance files.
