EU AI Act 'High-Risk' AI Systems: Real-World Enterprise Examples
Key Takeaways
- ✓ Annex III of the EU AI Act lists eight specific domains where AI is presumed High-Risk; escaping that classification requires a documented Article 6(3) assessment, not informal self-exemption.
- ✓ If your enterprise uses AI for recruitment, credit scoring, critical infrastructure, or biometric identification, you face the Act's heaviest regulatory burden.
- ✓ Deployers must implement human oversight, automated logging, bias testing, and quality management systems before August 2026.
- ✓ Non-compliance carries fines of up to €15 million or 3% of global annual turnover for breaches of high-risk obligations, rising to €35 million or 7% for prohibited practices, whichever amount is higher.
Your AI Is Probably High-Risk. Here's How to Know.
The EU AI Act's risk-based classification system places most of its regulatory weight on one category: High-Risk. Not "Unacceptable Risk" (which is simply banned). Not "Limited Risk" (which requires only transparency labels). High-Risk — the category that demands full conformity assessment, continuous monitoring, and mandatory technical documentation before deployment.
The Act's intent is clear: if your AI system influences decisions about people's fundamental rights — their employment, credit, education, healthcare, or freedom — it must be provably safe, transparent, and unbiased.
Here is what surprises most enterprises: the scope is far broader than they expect.
Four Enterprise Scenarios That Trigger High-Risk Classification
1. The HR Screening AI
Scenario: A multinational corporation receives 10,000 CVs monthly. They deploy an AI agent to parse CVs, rank candidates based on historical hiring patterns, and auto-advance the top 5% to interviews.
Classification: High-Risk — Annex III, Section 4: AI used for recruitment, candidate screening, CV filtering, task allocation, or performance evaluation.
Compliance requirements:
- Prove the model does not discriminate against protected characteristics (gender, ethnicity, age) — referred to as "bias testing" under Article 10.
- Maintain automated logs explaining exactly why each candidate was ranked, advanced, or rejected.
- Give applicants an explanation when they are subject to automated decision-making (the Article 86 right to explanation).
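Article 10 does not prescribe a specific fairness metric, but a common first-pass check for screening systems like this is the "four-fifths" disparate-impact ratio on selection rates. A minimal sketch (the group labels, sample data, and 0.8 threshold are illustrative assumptions, not requirements of the Act):

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Compute per-group selection rates from (group, advanced) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [advanced, total]
    for group, advanced in outcomes:
        counts[group][1] += 1
        if advanced:
            counts[group][0] += 1
    return {g: adv / total for g, (adv, total) in counts.items()}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

# Toy data: group A advanced 2 of 4 candidates, group B only 1 of 4.
outcomes = [("A", True), ("A", True), ("A", False), ("A", False),
            ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(outcomes)        # {"A": 0.5, "B": 0.25}
ratio = disparate_impact_ratio(rates)    # 0.25 / 0.5 = 0.5
print(f"ratio={ratio:.2f}, flagged={ratio < 0.8}")  # ratio=0.50, flagged=True
```

A ratio below 0.8 does not by itself establish unlawful discrimination; it is a trigger for deeper investigation and documentation under the Article 10 data-governance process.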
2. The Bank Loan Engine
Scenario: A European retail bank uses a machine learning model to evaluate creditworthiness and determine interest rates — or deny applications outright.
Classification: High-Risk — Annex III, Section 5(b): AI evaluating the creditworthiness of natural persons or establishing their credit score. (Risk assessment and pricing in life and health insurance falls under the neighbouring point 5(c).)
Compliance requirements:
- Implement stringent Human-in-the-Loop (HITL) overrides. If an applicant challenges a denial, the bank must suspend the automated decision and have a qualified human review the model's decision record.
- Demonstrate that training data does not encode historical socioeconomic biases that produce discriminatory outcomes.
- Register the system in the EU AI Database before deployment.
3. The Energy Grid Predictor
Scenario: A utility company uses AI to predict network congestion and autonomously route electricity from regional wind farms during demand spikes.
Classification: High-Risk — Annex III, Section 2: AI used as a safety component in the management and operation of critical infrastructure (road traffic and the supply of water, gas, heating, or electricity, plus critical digital infrastructure).
Compliance requirements:
- Undergo a Conformity Assessment (Article 43) prior to deployment — proving a hallucination or adversarial attack cannot trigger cascading infrastructure failure.
- Implement mandatory security sandboxing to isolate AI decision-making from direct grid control systems.
- Maintain continuous risk management documentation throughout the system's operational lifetime.
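The sandboxing requirement can be pictured as a hard, rule-based validator sitting between the AI's proposed routing actions and the grid control layer: the model may only recommend, and nothing reaches actuation without passing deterministic safety limits. A minimal sketch (the limit names and thresholds are invented for the example):

```python
def validate_routing_action(action: dict, limits: dict):
    """Deterministic safety gate applied outside the AI sandbox.

    Returns (accepted, reason); only accepted actions may be forwarded
    to the grid control system.
    """
    if action["load_mw"] > limits["max_line_load_mw"]:
        return False, "exceeds line load limit"
    if action["ramp_mw_per_min"] > limits["max_ramp_mw_per_min"]:
        return False, "ramp rate too aggressive"
    return True, "ok"

limits = {"max_line_load_mw": 400, "max_ramp_mw_per_min": 50}

# An AI-proposed action that would overload a line is rejected
# before it can touch the control system.
proposed = {"load_mw": 520, "ramp_mw_per_min": 30}
ok, reason = validate_routing_action(proposed, limits)
print(ok, reason)  # False exceeds line load limit
```

Because the validator is simple, deterministic code rather than a learned model, it can be verified exhaustively during the Article 43 conformity assessment, while the AI behind it remains free to change.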
4. The University Admissions Bot
Scenario: A university uses an LLM to read applicant essays and assign a preliminary score to help the admissions board process thousands of applications.
Classification: High-Risk — Annex III, Section 3: AI determining access to educational institutions or evaluating learning outcomes.
Compliance requirements:
- Transparently inform all applicants that their essays are being scored by AI (Article 13 transparency obligations).
- Maintain a Quality Management System (QMS) ensuring training data is diverse, representative, and free of systemic bias.
- Conduct ongoing accuracy monitoring post-deployment.
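Post-deployment accuracy monitoring can start as simply as tracking how often the AI's pass/fail call agrees with the admissions board's final decision, and alerting when agreement drifts. A toy sketch (the score threshold and sample data are illustrative assumptions):

```python
def agreement_rate(ai_scores, human_decisions, threshold):
    """Share of applications where the AI's pass/fail call (score >= threshold)
    matches the admissions board's final decision."""
    matches = sum((score >= threshold) == decision
                  for score, decision in zip(ai_scores, human_decisions))
    return matches / len(ai_scores)

ai_scores = [0.9, 0.4, 0.7, 0.2]          # preliminary AI essay scores
board = [True, False, True, True]          # final human admit decisions
rate = agreement_rate(ai_scores, board, threshold=0.5)
print(rate)  # 0.75 — the AI disagreed with the board on one of four cases
```

Logged over time, a falling agreement rate is an early signal of model drift and feeds directly into the continuous risk-management documentation the Act requires.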
What Deployers Must Do Before August 2026
If your use case falls into Annex III, consumer AI APIs (ChatGPT, Claude) and infrastructure that doesn't natively support structured logging will not, on their own, satisfy your deployer obligations. The Act mandates continuous risk management — not one-time compliance.
Architectural requirements for high-risk deployments:
- Automated Event Logging: The system must record every execution, input, and output natively — with cryptographic integrity for audit purposes (Article 12).
- Human Oversight: The interface must allow a human operator to override or shut down the AI immediately (Article 14).
- Data Governance: Total control over training data provenance and inference telemetry, with documentation requirements under Article 10.
- Sovereign Infrastructure: Full assurance that operational metadata, model weights, and processing telemetry remain under European legal jurisdiction — not subject to foreign CLOUD Act access requests.
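For the logging requirement in particular, one common tamper-evidence technique is a hash chain: each log entry commits to the digest of the previous one, so any after-the-fact edit breaks verification. A minimal sketch (the `AuditLog` class is an illustrative pattern, not a conformity-assessed implementation):

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only event log where each entry hashes its predecessor."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value for the first entry

    def append(self, event: dict) -> str:
        record = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "prev": self._prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        record["hash"] = digest
        self._prev_hash = digest
        self.entries.append(record)
        return digest

    def verify(self) -> bool:
        """Recompute the chain; any tampered entry breaks verification."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "event", "prev")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append({"input": "cv-123", "output": "rank=4", "model": "screener-v2"})
log.append({"input": "cv-124", "output": "rank=17", "model": "screener-v2"})
print(log.verify())  # True
```

In production this chain would be anchored externally (for example, periodically signing the latest digest), since an attacker who can rewrite the whole file could otherwise rebuild a consistent chain.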
NeuroCluster is a European sovereign AI platform engineered specifically for high-risk deployments. The platform provides out-of-the-box audit trails, HITL workflow gates, ephemeral MicroVM sandboxes, and conformity-assessment-ready logging — so your engineering team can focus on building the AI system, not building the compliance infrastructure.
See how sovereign AI works in practice
Explore the NeuroCluster Innovation Center — a structured programme for moving AI from pilot to compliant production.
Explore the Innovation Center Programme →