AI Act high‑risk systems are artificial intelligence systems classified as high risk under Article 6 of the European Union Artificial Intelligence Act (EU AI Act) because of their potential impact on individuals’ fundamental rights, safety, or access to essential services.
These systems are permitted but subject to prescriptive, role‑specific obligations depending on whether an organization acts as a provider or a deployer.
Under the EU AI Act, high-risk systems are AI applications listed in Annex III of the regulation, or systems that function as safety components of products covered by EU product-safety legislation listed in Annex I. Classification is based on the use case and context of deployment, not the underlying model or technique.
High‑risk use cases include AI systems used in:

- Biometric identification and categorization
- Management and operation of critical infrastructure (e.g., energy, transport)
- Education and vocational training (e.g., admissions, exam scoring)
- Employment and worker management (e.g., recruitment screening, promotion decisions)
- Access to essential private and public services (e.g., credit scoring, benefits eligibility)
- Law enforcement
- Migration, asylum, and border control
- Administration of justice and democratic processes
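As a rough illustration of the screening logic described above, the sketch below checks a use case against the Annex III areas. This is a hypothetical simplification, not legal classification logic: real Annex III assessment turns on context and legal analysis, and the area names and `is_high_risk` helper here are paraphrased for illustration only.

```python
# Illustrative sketch only -- area names are paraphrased from Annex III,
# and real classification requires contextual legal analysis, not a lookup.
ANNEX_III_AREAS = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_border_control",
    "justice_democratic_processes",
}

def is_high_risk(use_case_area: str, is_safety_component: bool = False) -> bool:
    """Screen a use case: flagged high-risk if it falls in an Annex III
    area or functions as a safety component of a regulated product."""
    return is_safety_component or use_case_area in ANNEX_III_AREAS

print(is_high_risk("employment"))                            # flagged
print(is_high_risk("video_games"))                           # not flagged
print(is_high_risk("video_games", is_safety_component=True)) # flagged
```

Note that the safety-component path is independent of Annex III: a system embedded in a regulated product can be high-risk even if its application area is otherwise unremarkable.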
High‑risk AI systems are not prohibited, but they must meet mandatory governance, risk management, and transparency requirements before and after deployment.
High‑risk systems sit at the intersection of AI innovation and societal trust. The EU AI Act establishes clear guardrails to ensure that these systems:

- Respect fundamental rights, health, and safety
- Operate transparently and under meaningful human oversight
- Are tested, documented, and monitored throughout their lifecycle
For organizations, effective management of high‑risk AI systems is essential to scaling AI responsibly, maintaining regulatory confidence, and protecting enterprise reputation while enabling continued innovation.
Management of high‑risk AI systems is role‑dependent and extends across the full AI lifecycle.
Providers of high‑risk AI systems are required to:

- Establish and maintain a risk management system across the AI lifecycle
- Apply data governance practices to training, validation, and testing data
- Prepare and keep up to date technical documentation
- Enable automatic logging of events (record‑keeping)
- Provide instructions for use and transparency information to deployers
- Design systems for effective human oversight
- Ensure appropriate levels of accuracy, robustness, and cybersecurity
- Undergo conformity assessment, affix CE marking, and register the system in the EU database
- Operate post‑market monitoring and report serious incidents
Deployers of high‑risk AI systems are required to:

- Use the system in accordance with the provider's instructions for use
- Assign human oversight to competent, trained personnel
- Ensure that input data under their control is relevant and sufficiently representative
- Monitor operation, and suspend use and inform the provider if risks arise
- Retain automatically generated logs for the required period
- Inform affected workers and, where required, individuals subject to the system's decisions
- Conduct a fundamental rights impact assessment where required (e.g., for public bodies)
- Report serious incidents
In practice, organizations operationalize these requirements by embedding continuous governance controls across development, deployment, and runtime operation, ensuring compliance is sustained rather than assessed at a single point in time.
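One way to picture the continuous-control idea above is a simple register that tracks whether each governance control has current evidence at each lifecycle stage, and surfaces gaps for remediation. This is a minimal sketch under assumed names: the stages, control names, and `ControlRegister` class are illustrative, not an official checklist from the Act.

```python
# Hypothetical lifecycle control register; control and stage names are
# illustrative, not prescribed by the EU AI Act.
from dataclasses import dataclass, field

LIFECYCLE_STAGES = ("development", "deployment", "runtime")

@dataclass
class ControlRegister:
    """Tracks whether each governance control has current evidence."""
    controls: dict = field(default_factory=dict)  # (stage, control) -> bool

    def record(self, stage: str, control: str, evidenced: bool) -> None:
        """Record the latest evidence status for a control at a stage."""
        self.controls[(stage, control)] = evidenced

    def gaps(self) -> list:
        """Return (stage, control) pairs lacking evidence, for remediation."""
        return sorted(key for key, ok in self.controls.items() if not ok)

reg = ControlRegister()
reg.record("development", "data_governance_review", True)
reg.record("deployment", "human_oversight_assigned", True)
reg.record("runtime", "incident_monitoring", False)
print(reg.gaps())  # [('runtime', 'incident_monitoring')]
```

The point of the sketch is the shape of the workflow, not the data structure: controls are re-evaluated as evidence ages, so the gap list is a living remediation queue rather than a one-off audit result.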
OneTrust helps organizations operationalize compliance for AI Act high-risk systems by enabling risk classification, documentation, and governance workflows. The OneTrust AI Governance solution supports AI transparency, accountability, and monitoring in alignment with the EU AI Act’s requirements.
An AI system is classified as high‑risk if it is used in one of the contexts listed in Annex III of the EU AI Act or functions as a safety component of a regulated product, and its use can materially affect individuals’ rights, safety, or access to essential services.
No. High-risk AI systems are permitted, but they must comply with obligations related to data governance, transparency, documentation, and human oversight that mitigate the risks they pose.
Organizations should conduct AI impact assessments (AIIAs), maintain detailed technical documentation, carry out fairness and bias testing, and implement governance controls throughout the AI lifecycle.
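The lifecycle measures above can be sketched as a minimal assessment record that flags open items before a system goes live. The `ImpactAssessment` fields and checks here are hypothetical illustrations, not fields prescribed by the Act or by any assessment template.

```python
# Illustrative AIIA-style record; field names are hypothetical, not an
# official template from the EU AI Act.
from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    system_name: str
    annex_iii_area: str
    affected_groups: list = field(default_factory=list)
    fairness_tested: bool = False
    human_oversight: str = ""  # who oversees the system, if assigned

    def open_items(self) -> list:
        """List outstanding actions before deployment can proceed."""
        items = []
        if not self.affected_groups:
            items.append("identify affected groups")
        if not self.fairness_tested:
            items.append("run fairness testing")
        if not self.human_oversight:
            items.append("assign human oversight")
        return items

aiia = ImpactAssessment("cv-screener", "employment", ["job applicants"])
print(aiia.open_items())  # ['run fairness testing', 'assign human oversight']
```

In practice such a record would link to evidence (test reports, documentation) rather than booleans, but the gating idea is the same: deployment waits until the open-item list is empty.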