
AI Act High-risk Systems

AI Act high‑risk systems are artificial intelligence systems classified as high risk under Article 6 of the European Union Artificial Intelligence Act (EU AI Act) because of their potential impact on individuals’ fundamental rights, safety, or access to essential services.

These systems are permitted but subject to prescriptive, role‑specific obligations depending on whether an organization acts as a provider or a deployer.


What are AI Act High-risk Systems?

Under the EU Artificial Intelligence Act (EU AI Act), high-risk systems are AI applications listed in Annex III of the regulation or systems that function as safety components of regulated products. Classification is based on use case and context, not the underlying model or technique.

High‑risk use cases include AI systems used in:

  • Biometric identification and categorization
  • Education and vocational training (e.g., admissions, testing, student assessment)
  • Employment and workforce management (e.g., recruitment, promotion, termination)
  • Access to essential private and public services (e.g., creditworthiness, social benefits)
  • Law enforcement
  • Migration, asylum, and border control
  • Administration of justice and democratic processes
  • Critical infrastructure

High‑risk AI systems are not prohibited, but they must meet mandatory governance, risk management, and transparency requirements before and after deployment. 
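As a purely illustrative sketch (not legal advice), a governance tool might represent the Annex III screening step above as a simple lookup; the area names below are paraphrased assumptions, and a positive match would only trigger further legal review, since classification also depends on context and exemptions:

```python
# Hypothetical screening helper for the Annex III areas listed above.
# Area identifiers are paraphrased for illustration, not official labels.
ANNEX_III_AREAS = {
    "biometric_identification_and_categorization",
    "education_and_vocational_training",
    "employment_and_workforce_management",
    "essential_services_access",
    "law_enforcement",
    "migration_asylum_border_control",
    "justice_and_democratic_processes",
    "critical_infrastructure",
}

def is_annex_iii_use_case(use_case: str) -> bool:
    """Return True if a declared use case maps to an Annex III area.

    A True result here would only flag the system for legal review;
    it does not by itself establish high-risk classification.
    """
    return use_case in ANNEX_III_AREAS

print(is_annex_iii_use_case("law_enforcement"))       # True
print(is_annex_iii_use_case("music_recommendation"))  # False
```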

Why AI Act High-risk Systems Matter 

High‑risk systems sit at the intersection of AI innovation and societal trust. The EU AI Act establishes clear guardrails to ensure that these systems:

  • Do not create unlawful discrimination or unfair outcomes
  • Remain explainable and subject to human oversight
  • Operate on appropriate, well‑governed data
  • Can be audited and corrected over time

For organizations, effective management of high‑risk AI systems is essential to scaling AI responsibly, maintaining regulatory confidence, and protecting enterprise reputation while enabling continued innovation.

How AI Act High-risk Systems Are Managed in Practice 

Management of high‑risk AI systems is role‑dependent and extends across the full AI lifecycle.

Providers of high‑risk AI systems are required to:

  • Establish and maintain a risk management system covering design, development, testing, and post‑market monitoring
  • Implement data governance controls to ensure training, validation, and testing data is relevant, representative, and traceable
  • Produce technical documentation and maintain automatic logging for auditability
  • Design human oversight mechanisms that enable meaningful intervention
  • Register high‑risk systems in the EU database prior to market placement
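To make the automatic-logging obligation above concrete, one hedged sketch of what a log entry might capture is shown below; the field names and structure are assumptions for illustration, not a format prescribed by the Act:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical shape of an automatic log entry supporting auditability.
# Field names are illustrative assumptions, not mandated by the EU AI Act.
@dataclass(frozen=True)
class AuditLogEntry:
    system_id: str      # identifier of the high-risk AI system
    model_version: str  # version of the model that produced the output
    input_ref: str      # reference to the input data (not the data itself)
    output_ref: str     # reference to the produced output
    operator: str       # human or service account that invoked the system
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example entry for a hypothetical CV-screening system.
entry = AuditLogEntry(
    system_id="cv-screening-01",
    model_version="2.3.1",
    input_ref="store://applications/123",
    output_ref="store://decisions/123",
    operator="hr-service",
)
print(entry.system_id, entry.model_version)
```

Keeping references to inputs and outputs, rather than the raw data itself, is one common design choice that keeps audit logs immutable without duplicating sensitive records.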

Deployers of high‑risk AI systems are required to:

  • Use systems in accordance with provider instructions and documented limitations
  • Conduct an AI impact assessment where required to evaluate risks to individuals’ rights and freedoms; under the EU AI Act, this assessment for high-risk systems is a fundamental rights impact assessment (FRIA)
  • Ensure appropriate human oversight in operational use
  • Monitor system performance and report serious incidents or malfunctioning
  • Maintain records demonstrating compliant use over time
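The monitoring and reporting duties above can be sketched as a simple drift check; the threshold logic and names below are illustrative assumptions, since the Act does not prescribe a specific technical mechanism:

```python
from dataclasses import dataclass

# Hypothetical monitoring check: flag a system whose observed error rate
# exceeds the limit documented by the provider, prompting incident review.
# All names and thresholds here are illustrative assumptions.
@dataclass
class PerformanceSnapshot:
    system_id: str
    observed_error_rate: float
    documented_max_error_rate: float

def requires_incident_review(snapshot: PerformanceSnapshot) -> bool:
    """Return True when performance falls outside documented limits."""
    return snapshot.observed_error_rate > snapshot.documented_max_error_rate

ok = PerformanceSnapshot("credit-scoring-02", 0.04, 0.05)
drifted = PerformanceSnapshot("credit-scoring-02", 0.09, 0.05)
print(requires_incident_review(ok))       # False
print(requires_incident_review(drifted))  # True
```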

In practice, organizations operationalize these requirements by embedding continuous governance controls across development, deployment, and runtime operation, so that compliance is sustained rather than point‑in‑time.

How OneTrust Helps With AI Act High-risk Systems

OneTrust helps organizations operationalize compliance for AI Act high-risk systems by enabling risk classification, documentation, and governance workflows. The OneTrust AI Governance solution supports AI transparency, accountability, and monitoring in alignment with the EU AI Act’s requirements.


FAQs About AI Act High-risk Systems

When is an AI system classified as high-risk?

An AI system is classified as high‑risk if it is used in one of the contexts listed in Annex III of the EU AI Act or functions as a safety component of a regulated product, and its use can materially affect individuals’ rights, safety, or access to essential services.

Are high-risk AI systems prohibited under the EU AI Act?

No. High-risk systems are allowed but must comply with obligations related to data governance, transparency, documentation, and human oversight to mitigate risks.

How should organizations manage high-risk AI systems?

Organizations should conduct AI impact assessments, maintain detailed technical documentation, ensure fairness testing, and implement governance measures throughout the AI lifecycle.
