Navigating the Draft EU AI Act

The EU's trilogue negotiations to finalize the EU AI Act continue to inch closer to a decision

Param Gopalasamy
Content Marketing Specialist, CIPP/E, CIPP/US, CIPM
November 13, 2023


A horizontal approach: Standing apart on the global stage 

In crafting its approach to artificial intelligence legislation, lawmakers in the European Union (EU) have opted for a horizontal legislative framework as the ongoing trilogue looks to finalize the regulation. The EU's draft Artificial Intelligence Act (EU AI Act) sets out an industry-agnostic, generally applicable legal framework for AI, meticulously structured across nearly a hundred articles. 

Here, we'll provide a window into the draft EU AI Act. This piece of legislation is not just the first of its kind; it is also a potential benchmark for global AI regulation, setting a precedent in a rapidly evolving AI landscape. 

Guarding values, fueling innovation 

The EU AI Act is carefully balanced. It’s not just about throwing a safety net around society, economy, fundamental rights, and the bedrock values of Europe that might be at risk due to AI systems; it’s also a nod to the power and potential of AI innovation, with built-in safeguards designed to promote and protect inventive AI strides. It looks to strike the balance of risk management and protecting critical infrastructure from potential pitfalls, while promoting the innovations that general-purpose AI can bring with it.

Crafting the EU AI Act has been anything but a walk in the park, with the definition of AI being one of the contentious corners. Since its inception proposal in April 2021, the Act has been a living document, seeing numerous iterations, each amendment reflecting the fluid discourse around AI technology and its implications for society. 

AI: Breaking down the concept 

The EU AI Act defines the machine learning approaches that underpin AI systems as "including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning." The complexity of AI systems sits on a sliding scale, with more intricate systems requiring substantial computing power and input data. The output from these systems can be simple or highly complex, varying with the sophistication of the AI in play. 

This broad definition covers a range of technologies and uses of AI, from your everyday chatbots to highly sophisticated generative AI models, such as ChatGPT. But it’s important to note that not every AI system falling under the Act’s broad definition will be regulated. The Act plays it smart with a risk-based approach, bringing under its regulatory umbrella only those systems associated with specific risk levels. 

AI regulation: Calibrated to risk 

Here’s where it gets interesting. The EU AI Act has different baskets for AI systems. Some are seen as posing an unacceptable risk to European values, leading to their prohibition. High-risk systems, while not banned, have to dance to a tighter regulatory tune. It’s vital to remember that these risk categories aren't static; the Act is still in a draft stage, and as more changes come, these risk categories will likely be fine-tuned as well. 

EU AI Act risk levels 

The EU AI Act defines three levels of permissible risk: high risk, limited risk, and minimal risk. Systems at these levels are allowed on the market, provided organizations meet the corresponding obligations. A fourth level, "unacceptable risk," is not permitted at all; companies whose models fall into this category must change them accordingly. 


  • Unacceptable Risk — Social scoring systems, real-time remote biometric identification.

  • High Risk — Credit scoring systems, automated insurance claims.

    For systems that fall into this bucket, companies need to conduct a conformity assessment and register the system in an EU database before the model is made available to the public. 

    Apart from this, these high-risk processes require detailed logs and human oversight as well.  

  • Limited Risk — Chatbots, personalization. 

    For limited risk processes, companies need to ensure that they’re being completely transparent with their customers about what AI is being used for and the data involved.  

  • Minimal Risk — For any processes that companies use that fall into the “minimal risk” bucket, the draft EU AI Act encourages providers to have a code of conduct in place that ensures AI is being used ethically. 

Pyramid graphic showing the levels of permissible AI risk areas defined by the EU AI Act and what the act requires organizations to do to address these areas of risk. Starting from the lowest level: Minimal risk areas require a code of conduct; limited risk areas need transparency; high risk areas need conformity assessments; and at the top level are areas that are considered unacceptable.
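As a rough sketch, the tiered structure above can be expressed as a simple lookup. All names here (the enum, the example use cases, the obligation strings) are our own illustrative shorthand, not terms from the Act itself:

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers under the draft EU AI Act (hypothetical mapping)."""
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment, EU database registration, logging, human oversight"
    LIMITED = "transparency obligations"
    MINIMAL = "voluntary code of conduct"


# Example use cases mapped to tiers, following the draft's own examples
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def obligations(use_case: str) -> str:
    """Return the obligations summary for a known example use case."""
    return EXAMPLE_USE_CASES[use_case].value
```

The point of the sketch is that obligations attach to the tier, not to the individual system: classify once, then the compliance workload follows.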


Conformity assessments 

Of these risk levels, high-risk systems pose the highest compliance burden on organizations, as they must meet ongoing obligations for conformity assessments. Conformity assessments (CAs) require companies to ensure that their high-risk systems meet the following: 

  • The quality of data sets used to train, validate and test the AI systems; the data sets must be “relevant, representative, free of errors and complete.”
  • Detailed technical documentation.

  • Record-keeping in the form of automatic recording of events.

  • Transparency and the provision of information to users.

  • Human oversight.

This assessment is mandatory before a high-risk AI system is made available or used in the EU market, and it must be revisited if there are significant modifications or changes in intended use. The main responsible party for the CA is the "provider," the entity placing the system on the market. However, under certain circumstances, the responsibility can shift to the manufacturer, distributor, or importer, especially when they modify the system or its purpose. 
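The five requirements listed above lend themselves to a simple checklist. This is only a sketch of how a provider might track them internally; the field names are our own and carry no legal weight:

```python
from dataclasses import dataclass, fields


@dataclass
class ConformityChecklist:
    """Hypothetical internal tracker for the draft Act's high-risk requirements."""
    data_quality: bool = False            # relevant, representative, error-free, complete data sets
    technical_documentation: bool = False  # detailed technical documentation
    automatic_event_logging: bool = False  # record-keeping via automatic recording of events
    user_transparency: bool = False        # provision of information to users
    human_oversight: bool = False          # human oversight mechanisms

    def ready_for_market(self) -> bool:
        """Every requirement must be satisfied before placing the system on the EU market."""
        return all(getattr(self, f.name) for f in fields(self))
```

A new system starts with every box unchecked, and `ready_for_market()` only flips to true once all five requirements are met, mirroring the all-or-nothing nature of the assessment.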

Who performs a CA? 

The CA can be done internally or by an external “notified body.” Internal CAs are common as providers are expected to have the necessary expertise. Notified bodies come into play particularly when an AI system is used for sensitive applications like real-time biometric identification and does not adhere to pre-defined standards. 

During an internal CA, the provider checks compliance with quality management standards, assesses technical documentation, and ensures the AI system's design and monitoring are consistent with requirements. Success results in an EU declaration of conformity and a CE marking, signaling compliance, which must be kept for ten years and provided to national authorities if requested. 

For third-party CAs, notified bodies review the system and its documentation. If compliant, they issue a certificate; otherwise, they require the provider to take corrective action. 

How often should you perform a CA? 

Conformity assessment isn't a one-off process; providers must continually monitor their AI systems post-market to ensure they remain compliant with the evolving draft EU AI Act. In cases where a notified body is involved, they will conduct regular audits to verify adherence to the quality management system. 

Engaging all players in the AI game 

The EU AI Act is not just handing out responsibilities to AI providers; it’s casting its net wider to include various actors in the AI lifecycle, from users to deployers. And its reach is not just limited to the EU; it has global ambitions, affecting entities even outside the EU, thus having implications that are worldwide. 

Fines: A significant deterrent 

With the European Commission defining enforcement penalties for the EU AI Act, fines for non-compliance stand at a maximum of 30 million euros or 6% of global turnover, whichever is higher. For context, these caps are 50% greater than those of the GDPR, which carries maximum fines of 20 million euros or 4% of global turnover, underlining the EU's commitment to ensuring strict adherence to the EU AI Act. 
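Because each cap is the higher of a fixed amount or a share of worldwide turnover, maximum exposure is simple arithmetic. The function names below are ours, and the figures reflect the draft as discussed here (final numbers may change in trilogue):

```python
def max_ai_act_fine(global_turnover_eur: float) -> float:
    """Higher of EUR 30M or 6% of worldwide annual turnover (draft EU AI Act figures)."""
    return max(30_000_000, 0.06 * global_turnover_eur)


def max_gdpr_fine(global_turnover_eur: float) -> float:
    """Higher of EUR 20M or 4% of worldwide annual turnover (GDPR Art. 83 top tier)."""
    return max(20_000_000, 0.04 * global_turnover_eur)
```

For a company with EUR 1 billion in turnover, the turnover-based limb dominates: the AI Act cap would be EUR 60 million against the GDPR's EUR 40 million. For a small company, the fixed amounts of EUR 30 million and EUR 20 million apply instead.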

Charting the course towards regulated AI 

The EU AI Act is a bold statement by the EU, meticulously balancing the act of fostering AI innovation while ensuring that the core values and rights of society are not compromised. With the Act inching closer to its final stages of approval, it’s crucial for everyone in the AI space to keep an eye on its development.  

Whether you’re a provider, user, or someone involved in the deployment of AI, preparing for a future where AI is not just a technological marvel but also a subject of defined legal boundaries and responsibilities is imperative. This introduction offers a glimpse into the EU AI Act’s journey and potential impact, setting the stage for the deeper analysis that unfolds in the subsequent sections. So, buckle up and let’s dive deeper into understanding the nuances and implications of the EU AI Act together. 

AI frameworks: A global perspective 

A landscape in flux: The global heat map of AI frameworks 

The global AI framework landscape underscores the imperative need for more cohesive international rules and standards pertaining to AI. The proliferation of AI frameworks is undeniable, calling for enhanced international collaboration to at least align on crucial aspects, such as arriving at a universally accepted definition of AI. 

Global map showing the different AI regulations and proposals from various major countries.


Within the tapestry of the European Union’s legal framework, the EU AI Act is a significant thread, weaving its way towards completion. Concurrently, there’s a mosaic of initiatives at the member-state level, with authoritative bodies across various nations rolling out non-binding guidelines, toolkits, and resources aimed at providing direction for the effective use and deployment of AI. 

Efficient future processes through AI

AI promises quicker, more efficient, and accurate processes in various sectors. For example, in insurance, AI has streamlined the assessment process for car accidents, optimizing a process that was once manual and lengthy. This example serves as a testament to AI's potential to significantly improve various aspects of business and everyday life. 

But engaging with AI is a nuanced dance, a careful balancing act between leveraging its unparalleled potential and navigating the associated risks. With its transformative and disruptive capabilities, AI invites cautious and informed engagement. 

Recognizing its transformative power while preparing for the challenges it brings to the table is essential for individuals and organizations alike as they navigate the dynamic landscape of artificial intelligence in the modern age. 

Weighing AI’s pros and cons in business 

Risks: Transparency, accuracy, and bias 

Despite its myriad advantages, AI isn’t without substantial challenges and risks. For starters, some AI systems, which may be perceived as “black boxes,” have been the subject of intense scrutiny and debate over transparency issues. This concern is particularly salient with larger AI systems, such as extensive language models, where there’s a lack of clarity on the training data employed. This raises significant copyright and privacy concerns, which need to be addressed head-on.  

Furthermore, the struggle with ensuring the accuracy of AI systems persists, with several instances of erroneous AI responses and predictions documented. Notably, bias that may arise in AI systems—stemming from the prejudiced data they may be trained on—poses a risk of discrimination, requiring vigilant monitoring and rectification efforts from stakeholders involved. 

AI as solution: Turning risks into opportunities 

Interestingly, AI isn’t just a challenge; it is also a potential solution to these conundrums. For instance, AI can be leveraged to identify and mitigate biases within datasets. Once these biases are discerned, strategic steps can be taken to rectify them, ensuring that AI can be harnessed optimally to maximize its benefits while minimizing associated risks. 

Developing AI governance: The way forward 

Laying the foundations for AI governance 

With the dynamic and complex AI landscape unfolding rapidly, there is an urgent need for legal and privacy professionals to lay the groundwork for robust AI governance and compliance programs. A wealth of existing guidance provides a preliminary roadmap for the essential requirements of such programs, with senior management's endorsement being a pivotal first step in this endeavor.  

Engaging C-suite executives and ensuring they comprehend the magnitude and intricacies of AI's influence is crucial for fostering a culture of AI responsibility throughout the organization. This initiative transcends mere compliance, extending to building trust in AI applications – a cornerstone for successful business operations. 

Practical steps towards an AI governance framework 

On the material front, organizations can use practical guidelines for ethical AI use. These guidelines are aligned with the AI principles from the Organization for Economic Cooperation and Development (OECD): 


  1. Transparency: Efforts should be directed towards demystifying AI applications, making their operations and decisions understandable and explainable to users and stakeholders. 

  2. Privacy Adherence: AI applications should respect and protect users’ privacy, handling personal data judiciously and in compliance with relevant privacy laws and regulations. 

  3. Human Control: Especially in high-risk areas, there should be mechanisms for human oversight and control over AI applications, ensuring they align with human values and expectations.

  4. Fair Application: Strategies for detecting and mitigating biases in AI applications should be implemented, promoting fairness and avoiding discrimination.

  5. Accountability: There should be comprehensive documentation and recording of AI operations, allowing for scrutiny, accountability, and necessary corrections. 


AI ethics policy: A critical element 

The establishment of AI Ethics Policies, informed by ethical impact assessments, is essential in navigating challenges and making informed, ethical decisions regarding AI use. For example, instead of outright blocking certain AI applications, ethical impact assessments can guide organizations in implementing nuanced, responsible use policies, especially for sensitive data. Ethical considerations should inform every step of AI application, from inception and development to deployment and monitoring. 

Inclusive AI governance: A size-agnostic imperative 

Importantly, AI governance is not an exclusive domain of large corporations with extensive resources. With AI use cases proliferating across various sectors, companies of all sizes will inevitably engage with AI, necessitating AI governance frameworks tailored to their specific needs and capacities. 

A few universal principles apply regardless of the company’s size. First, securing executive buy-in and adopting a multidisciplinary approach is imperative for successful AI governance implementation.  

Second, organizations should commence with high-level principles as a starting point, even if they are small or merely purchasing ready-made AI models. Training and upskilling employees across various functions, including procurement and technology, is also vital to understand and mitigate the risks associated with AI tools and applications. 

Embedding core governance principles 

Six core governance principles need to be embedded into AI governance programs: 

  1. Governance and Accountability: Establishing a structure for accountability, possibly through AI oversight committees or ethics review boards, is essential. Governance should be enforced throughout AI’s lifecycle, from inception to operation.

  2. Human Oversight: Adopting a human-centric approach, with trained human reviewers at various stages, is crucial for ethical AI application. 

  3. Fairness and Ethics Alignment: AI outputs should align with fairness and ethical standards, reflecting an organization’s culture and values. 

  4. Data Management: Implementing robust data management processes, tracking modifications to datasets and mapping data sources, is key for reliable AI systems. 

  5. Transparency Enhancement: Ensuring that AI decision-making processes are transparent and understandable is necessary for building trust and compliance. 

  6. Privacy and Cybersecurity: Addressing legal data processing requirements, conducting privacy impact assessments, and mitigating AI-specific cyber risks are imperative for secure and compliant AI applications. 

Given the pace at which AI is evolving and its profound implications, organizations must proactively develop and implement AI governance programs. By adopting a set of core governance principles and practices, organizations can navigate the AI landscape responsibly, ethically, and effectively. These principles, informed by ethical considerations, legal compliance, and a commitment to transparency and accountability, will guide organizations in harnessing AI’s benefits while mitigating its risks, ultimately fostering trust and success in the AI-driven future. 

Value-driven AI governance 

As organizations delve deeper into the realm of AI, developing and implementing AI governance programs aligned with their values is paramount. These governance frameworks should not only ensure compliance with legal standards but also reflect the ethical commitments and values of the organizations.  

Whether it's about making tough trade-offs between transparency and security or deciding on the ethical use of data, a values-driven approach to AI governance provides a reliable compass guiding organizations through the intricate landscape of AI applications and ethics. 

Final thoughts and tips on AI governance 

AI, GDPR, and data privacy 

When considering the interaction between AI, the draft of the EU AI Act, and GDPR, it’s crucial to acknowledge existing guidance on utilizing AI in line with GDPR. Noteworthy resources include the toolkit provided by the UK’s Information Commissioner's Office (ICO) and the comprehensive guidance and self-assessment guide offered by France's CNIL. These tools offer valuable controls and checklists, assisting organizations in ensuring compliance of their AI use with GDPR requirements. 

A starting point for aligning data usage within AI frameworks with GDPR principles is to conduct diligent Data Protection Impact Assessments (DPIAs) to ensure that all these processes remain compliant.  

AI governance start point: Privacy professionals are well-positioned to serve as orchestrators, bringing together various functions and skill sets within organizations to address AI governance comprehensively. This collaborative approach not only ensures compliance but also functions as a business enabler, fostering a proactive and informed approach to emerging challenges and opportunities in the AI landscape. 

Keep calm and AI: Embrace technological developments with a sense of calm and curiosity. Engaging with the fast-paced and continually evolving field of AI requires a willingness to learn and adapt, acknowledging that understanding and addressing the risks and potentials of AI is a journey rather than a destination. 

Evolution of professional roles: With the continuous changes in technology and data processing, the roles of data protection officers are evolving, potentially transitioning towards “data trust officers”. It’s imperative for professionals in the field to be open to assuming new roles and responsibilities as the technology and regulatory landscape transforms. 


To give your organization a 5-step plan to get started: 

  1. Engage with AI governance programs immediately; proactive engagement is crucial.

  2. Secure management buy-in since AI governance requires a multi-stakeholder, enterprise-wide approach.

  3. Assemble a diverse and skilled team, encompassing legal, compliance, data science, HR, information security, and possibly external experts.

  4. Prioritize, set realistic and achievable goals, and consider adopting a phased approach to AI governance. 

  5. Stay abreast of AI developments, actively engage with industry peers, and participate in AI governance initiatives to foster a collaborative and informed community. 

With the evolving landscape of AI, organizations must proactively engage with AI governance. A collaborative, multi-stakeholder approach is necessary to address the complex challenges and opportunities presented by AI.  


To learn more about how AI Governance can help your organization, request a demo today.
