The EU AI Act creates four different risk levels to characterize the use of AI systems. Learn more about each level and how it can impact the use of AI in your organization.
Laurence McNally
Product Manager, OneTrust AI Governance
November 30, 2023
The proposed EU AI Act takes a comprehensive approach to regulating artificial intelligence, laying down obligations for providers and deployers in an effort to ensure the safe and ethical use of AI technology. To do this, the EU has drafted the first regulatory framework for AI, which categorizes AI systems by risk level.
The draft EU AI Act breaks down risk for AI systems into four different categories:
Unacceptable risk
This category bans AI systems that are clear threats to human safety or rights. For example, toys with voice assistance promoting dangerous behavior, or social scoring by governments that might lead to discrimination, are considered unacceptable.
High risk
The draft EU AI Act considers AI systems that pose a threat to human safety or fundamental rights to be high risk. This can include systems used in toys, aviation, cars, medical devices, and elevators – all products that fall under the EU’s product safety regulations.
High-risk systems can also include critical infrastructure (such as transport systems) where a malfunction could endanger lives. But aside from physical safety, these risk levels are also designed to help protect human rights and quality of life; for instance, using AI to score exams or sort resumes is considered high risk, as it could impact someone’s career path and future.
Various law enforcement activities are also considered high risk, like evaluating the reliability of evidence, verifying travel documents at immigration control, and remote biometric identification.
Limited risk
This level targets AI systems with specific transparency needs, like a chatbot used for customer service. Your users have to be aware that they’re interacting with a machine and be given the opportunity to opt out and speak to a human instead. Limited-risk systems rely on transparency and the informed consent of the user, as well as an easy option to withdraw.
Minimal risk
Minimal risk is the lowest level of risk set forth in the AI Act, and refers to applications like AI-powered video games or email spam filters.
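For teams that track these tiers in their own tooling, the four levels map naturally onto a simple enumeration. Here’s a minimal Python sketch using the article’s own examples; the names and mappings are illustrative only, not an official EU taxonomy or OneTrust’s implementation:

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers set out in the draft EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # allowed only if strict requirements are met
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # no additional obligations

# Illustrative mapping of the examples above to their tiers
# (hypothetical labels, not official EU categories).
EXAMPLE_TIERS = {
    "government social scoring": RiskLevel.UNACCEPTABLE,
    "AI resume screening": RiskLevel.HIGH,
    "customer service chatbot": RiskLevel.LIMITED,
    "email spam filter": RiskLevel.MINIMAL,
}
```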
Recently, the EU Parliament introduced amendments that address generative AI and impose additional transparency rules for its use. Negotiations between the European Parliament, Council, and Commission have also focused on the tiering of foundation models, so we may see additional clarifications around their use.
AI tools like large language model-backed chatbots (think ChatGPT) may have to follow additional rules like revealing that the content was produced using AI, ensuring that the model isn’t creating illegal content, and publishing summaries of copyrighted data used for training.
It’s one thing to see the EU AI Act’s risk levels laid out, but it’s another to understand how they fit into your daily business operations.
Unacceptable risk systems, as the name suggests, are prohibited by the AI Act, and therefore can’t be used by anyone in your organization. A tool like OneTrust AI Governance will automatically flag and reject any system categorized at this risk level, protecting your organization and freeing up your team’s time for other reviews.
On the other end of the risk spectrum, limited and minimal risk systems can be automatically approved by your AI Governance tool, allowing your broader team to move forward with their project and continue to innovate with AI.
In both of these cases, the decision to approve or deny use of the system can be made automatically by your tool, as the guidelines are clear either way. Where things get less clear is with high-risk systems.
Systems that are deemed high risk aren’t automatically banned under the draft EU AI Act, but they do have additional requirements that must be met before the system can be deployed. These requirements demonstrate that the technology and its use don’t pose a significant threat to health, safety, or fundamental human rights.
Developers of AI systems determine their system’s risk category themselves, using standards set forth by the EU. Once a deployer decides to put a high-risk system to use, they take on ongoing compliance, monitoring, human oversight, and transparency obligations.
This ongoing compliance and monitoring can take a lot of manual labor, so finding ways to automate these reviews will save your team significant time. OneTrust flags high-risk systems for manual review rather than automatically rejecting them; your compliance team then needs to take the time for due diligence and decide whether using the system – and taking on the additional operational responsibility – is worth it, or whether it would be better to pursue a different system instead.
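To make that decision flow concrete, here is a minimal sketch of the triage policy described above: auto-reject for unacceptable risk, auto-approve for limited and minimal risk, and a manual-review queue for high risk. The function and its names are hypothetical, not OneTrust’s actual API (the enum is repeated so the snippet stands alone):

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def triage(risk: RiskLevel) -> str:
    """Route a proposed AI system based on its assigned risk tier."""
    if risk is RiskLevel.UNACCEPTABLE:
        return "reject"         # prohibited by the Act; never deployed
    if risk is RiskLevel.HIGH:
        return "manual_review"  # compliance team performs due diligence
    return "approve"            # limited/minimal risk is auto-approved
```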
To see how a real project might move forward using an AI system and an AI Governance tool, here’s a practical example.
Suppose your marketing team wants to use OpenAI’s GPT-4 to create personalized marketing emails. This is the project initialization phase, where a team identifies a use case for an AI system and needs to get it approved.
Your compliance team would then need to conduct an assessment to determine if the system makes sense and is safe to use. OneTrust offers these accessible and concise assessments, where the project owner can lay out their goals and intentions.
From there, the project needs to be assigned a risk categorization. OneTrust automates this process by assessing the project and automatically assigning a risk level, as explained above.
Depending on the level of risk assigned, your team can then expedite deployment of their project. In this case, the use of GPT-4 has been deemed low-risk by the AI Governance tool, and is automatically approved for the marketing team to move forward with their project.
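Continuing the hypothetical sketch from earlier (this assumes the RiskLevel enum and triage() function defined above), the marketing team’s request might flow through the tool like this:

```python
# Hypothetical walkthrough of the workflow above, reusing the
# RiskLevel and triage() sketch defined earlier.
use_case = {
    "project": "personalized marketing emails",
    "system": "OpenAI GPT-4",
}
assigned = RiskLevel.MINIMAL  # the tier assigned during the assessment
decision = triage(assigned)   # -> "approve": the team can proceed
print(f"{use_case['project']}: {decision}")
```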
The OneTrust AI Governance solution offers more than just compliance with the EU AI Act. It's a complete solution for overseeing AI activities in your organization.
We help you innovate faster without neglecting safety, whether you're dealing with the EU's regulations or other governance challenges.
Request a demo today to explore how OneTrust can play an integral role in your AI journey.
Webinar
In this webinar, we’ll explore how OneTrust helps organizations meet EU AI Act compliance by operationalizing AI governance frameworks.
White Paper
Download this white paper to learn how to adapt your data governance program by defining AI-specific policies, monitoring data usage, and centralizing enforcement.
Report
Getting Ready for the EU AI Act, Phase 1: Discover & Catalog, The Gartner® Report
Webinar
Join our webinar and learn about the EU AI Act's enforcement requirements and practical strategies for achieving compliance and operational readiness.
Webinar
Watch this webinar for insights on ensuring responsible data use while building effective AI and privacy programs.
Resource Kit
Download this resource kit to help you understand, navigate, and ensure compliance with the EU AI Act.
Webinar
Join our webinar to hear about the challenges and solutions in AI governance as discussed at the IAPP conference, featuring insights and learnings from our industry thought leadership panel.
Webinar
In this webinar, we’ll break down the AI development lifecycle and the key considerations for teams innovating with AI and ML technologies.
Report
In this 5-part regulatory article series, OneTrust sponsored the IAPP to uncover the legal frameworks, policies, and historical context pertinent to AI governance across five jurisdictions: Singapore, Canada, the U.K., the U.S., and the EU.
Webinar
In this webinar, we’ll look at the AI development lifecycle and key considerations for governing each phase.
Webinar
In this webinar, we’ll discuss the evolution of privacy and data protection for AI technologies.
Webinar
In this webinar, we’ll discuss key updates and drivers for AI policy in the US, examining actions being taken by the White House, FTC, NIST, and individual states.
Webinar
In this webinar, OneTrust DataGuidance and experts will examine global developments related to AI, highlighting key regulatory trends and themes that can be expected in 2024.
Webinar
In this webinar, we’ll break down the four levels of AI risk under the AI Act, discuss legal requirements for deployers and providers of AI systems, and so much more.
Webinar
Join Sidley and OneTrust DataGuidance for a reactionary webinar to unpack the recently published, near-final text of the EU AI Act.
Checklist
Managing third-party risk is a critical part of AI governance, but you don’t have to start from scratch. Use these questions to adapt your existing vendor assessments to be used for AI.
Webinar
In this webinar, we’ll look at the AI governance landscape, key trends and challenges, and preview topics we’ll dive into throughout this masterclass.
Webinar
In this webinar, we’ll talk about setting up an AI registry, assessing AI systems and their components for risk, and unpack strategies to avoid the pitfalls of repurposing records of processing to manage AI systems and address their unique risks.
Webinar
Join Sidley and OneTrust DataGuidance for a reactionary webinar on the EU AI Act.
Webinar
Join this on-demand session to learn how you can leverage first-party data strategies to achieve both privacy and personalization in your marketing efforts.
Webinar
Join the OneTrust and KPMG webinar to learn more about the top trends from this year’s IAPP Europe DPC.
eBook
Conformity Assessments are a key and overarching accountability tool introduced by the EU AI Act. Download the guide to learn more about the Act, Conformity Assessments, and how to perform one.
Infographic
A Conformity Assessment is the process of verifying and/or demonstrating that a “high-risk AI system” complies with the requirements of the EU AI Act. Download the infographic for a step-by-step guide to perform one.
eBook
With the use of AI proliferating at an exponential rate, the EU rolled out a comprehensive, industry-agnostic regulation that looks to minimize AI’s risk while maximizing its potential.
White Paper
What are your obligations as a business when it comes to AI? Are you using it responsibly? Learn more about how to go about establishing an AI governance team.
Webinar
Navigate global AI regulations and identify strategic steps to operationalize compliance with the AI governance masterclass series.