The EU AI Act has been a long time coming. What do we know so far, and what does it mean for your organization?
Lauren Diethelm, AI Content Marketing Specialist, OneTrust
December 11, 2023
After months of deliberation and a three-day final trilogue, the EU reached a deal on the EU AI Act on December 8. Designed to be broad and industry-agnostic, the AI Act aims to strike a balance between fostering AI innovation and protecting people's safety and fundamental rights when they interact with AI systems.
The EU AI Act takes a risk-based approach to regulating AI, sorting systems into four categories: unacceptable, high, limited, and minimal risk. Unacceptable-risk systems are prohibited by the AI Act and can't be used. Most limited- and minimal-risk systems, like email spam filters, can be used with few or no additional safeguards in place.
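The tiering above can be sketched as a simple lookup in an internal inventory. This is a purely illustrative sketch: the enum names and the example system-to-tier mappings are assumptions for demonstration, not text from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    # Illustrative tiers mirroring the Act's risk-based approach
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # allowed, subject to conformity assessment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # few or no additional safeguards

# Hypothetical internal mapping of system types to tiers
EXAMPLE_TIERS = {
    "social-scoring": RiskTier.UNACCEPTABLE,
    "resume-screening": RiskTier.HIGH,
    "customer-chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
}

def may_be_deployed(system_type: str) -> bool:
    """Unacceptable-risk systems are prohibited; everything else may be
    deployed once its tier-specific obligations are met."""
    return EXAMPLE_TIERS[system_type] is not RiskTier.UNACCEPTABLE
```

A real inventory would attach far more metadata per system, but even this shape makes the first governance question answerable: which of our systems are prohibited, and which carry extra obligations?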
The Act also lays out specific guidelines for generative AI (GenAI) systems, including additional transparency requirements: disclosing that content was generated by AI, designing models to prevent them from generating illegal content, and publishing summaries of the copyrighted data used to train the model.
Systems categorized as high risk by the AI Act can still be used, but they must meet additional requirements before they can enter the market or be widely deployed. For example, high-risk systems may have to undergo conformity assessments.
A conformity assessment determines whether a high-risk AI system has met these additional requirements.
Conformity assessments must be completed before an AI system is put on the market and made available for public use. Once the system is widely available, the assessment must be repeated any time the system undergoes a substantial change.
These assessments are typically performed by the providers or developers of high-risk AI systems, but in certain cases they may be conducted by another responsible actor, such as a distributor, importer, deployer, or other third party.
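The lifecycle just described, assess before market entry, then reassess after any substantial change, can be captured as state on an inventory record. A minimal sketch, assuming hypothetical field names (`assessed`, `on_market`) rather than terms from the Act:

```python
from dataclasses import dataclass

@dataclass
class HighRiskSystem:
    """Minimal record for tracking conformity-assessment status.
    Field names are illustrative, not terminology from the Act."""
    name: str
    assessed: bool = False   # has a conformity assessment been completed?
    on_market: bool = False

    def complete_assessment(self) -> None:
        self.assessed = True

    def place_on_market(self) -> None:
        # The assessment must be completed before market entry
        if not self.assessed:
            raise RuntimeError(f"{self.name}: conformity assessment required first")
        self.on_market = True

    def record_substantial_change(self) -> None:
        # A substantial change invalidates the prior assessment,
        # so the system must be reassessed
        self.assessed = False
```

The design choice worth noting is that a substantial change simply resets the `assessed` flag, which makes "needs reassessment" a queryable property of the record rather than something tracked out of band.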
In the evolving landscape of AI regulation, the Act encountered a significant hurdle in early November concerning the regulation of foundation models, which are central to generative AI systems and utilize data from diverse internet sources. Initially, there was consensus on a tiered regulatory framework, applying stricter rules to more potent systems. However, during the trilogue meeting on November 10, major stakeholders like Germany, France, and Italy opposed any special regulation for foundation models. They argued that imposing tiered regulations on certain systems could inhibit innovation and undermine the Act's overall risk-based methodology.
Despite these challenges, the legislation has evolved to account for the dynamic nature of general-purpose AI (GPAI) systems and their integration into high-risk areas, covering not only the various uses of GPAI systems but also their specific applications.
Specifically for foundation models, new rules have been set. Renowned for their versatility in performing complex functions such as generating videos, texts, images, and engaging in advanced language interactions, these models are now subject to strict transparency requirements before they enter the market. The regulations are particularly stringent for 'high impact' foundation models, characterized by their large-scale data training, complex functionalities, and superior performance, which could pose systemic risks in multiple sectors.
The regulation of general AI systems has also been refined to accommodate their wide range of capabilities and rapid development. GPAI systems and their underlying models are now mandated to comply with transparency guidelines as initially proposed by the Parliament. This includes the creation of technical documentation, adherence to EU copyright laws, and the provision of detailed summaries of the training content used.
For GPAI models identified as having high systemic risk, the Parliament has secured more rigorous regulations. These models are required to undergo thorough evaluations, address and mitigate systemic risks, perform adversarial testing, report serious incidents to the Commission, ensure strong cybersecurity, and report on their energy efficiency. Until EU-wide standards are established, GPAI systems at risk of causing systemic issues may adhere to existing codes of practice as a means of regulatory compliance.
Though critical for protecting the safety and fundamental rights of people using AI systems, the AI Act adds one more piece to the complex puzzle of regulations that companies must comply with.
Even for companies not operating in the EU, compliance remains a concern given the Act's extraterritorial effect. And because the Act is so comprehensive, it can serve as a guiding light for US companies looking to get ahead of the AI governance and compliance curve, just as many have used the GDPR for years.
If you're not a provider of AI systems, the responsibility for conducting conformity assessments on high-risk systems likely won't fall to you. Where you will have obligations is in gaining visibility within your own organization and understanding where AI is being used in your business. From there, you can educate employees about risk, roll out responsible-use policies, and monitor systems for significant changes that may affect their risk categorization.
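That visibility exercise boils down to a registry: which departments use which AI systems, and at what risk level. A minimal sketch, where the function names, the department labels, and the plain-string risk levels are all hypothetical:

```python
# Hypothetical org-wide AI inventory; names and risk labels are
# illustrative assumptions, not terminology from the Act.
registry = {}  # department -> list of (system, risk_level) tuples

def register_use(department, system, risk_level):
    """Record that a department uses a given AI system."""
    registry.setdefault(department, []).append((system, risk_level))

def high_risk_by_department():
    """Surface the systems that warrant the closest monitoring."""
    return {
        dept: [name for name, risk in uses if risk == "high"]
        for dept, uses in registry.items()
        if any(risk == "high" for _, risk in uses)
    }
```

For example, after `register_use("HR", "resume-screener", "high")` and `register_use("IT", "spam-filter", "minimal")`, the report would list only HR, which is exactly the slice of the business where the Act's heavier obligations apply.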
It's important to note that the EU AI Act isn't quite a done deal just yet. Technical work remains to iron out certain provisions and stipulations in the Act, and it has yet to be voted on by the Council and Parliament.
Understanding where AI models are used in your organization and what risk level they fall into is a key part of setting up your AI governance program under the guidelines of the EU AI Act. With OneTrust AI Governance, you can easily maintain your inventory of AI systems across your business.
To get started with your AI governance program and to learn how OneTrust can help you through the process, request a demo today.