A horizontal approach: Standing apart on the global stage
In crafting its approach to artificial intelligence legislation, European Union (EU) lawmakers have opted for a horizontal legislative framework, with the ongoing trilogue looking to finalize the regulation. The EU’s draft Artificial Intelligence Act (EU AI Act) lays out an industry-agnostic, general-purpose legal framework for AI, meticulously designed across nearly a hundred articles.
Here, we’ll provide a window into the draft EU AI Act. This piece of legislation is not just the first of its kind, but also a potential benchmark for global AI regulation, setting a precedent in a rapidly evolving AI landscape.
Guarding values, fueling innovation
The EU AI Act is carefully balanced. It’s not just about throwing a safety net around society, the economy, fundamental rights, and the bedrock values of Europe that might be at risk due to AI systems; it’s also a nod to the power and potential of AI innovation, with built-in safeguards designed to promote and protect inventive AI strides. It looks to strike a balance: managing risk and protecting critical infrastructure from potential pitfalls, while promoting the innovation that general-purpose AI can bring.
Crafting the EU AI Act has been anything but a walk in the park, with the definition of AI itself being one of the contentious corners. Since its initial proposal in April 2021, the Act has been a living document, seeing numerous iterations, each amendment reflecting the fluid discourse around AI technology and its implications for society.
AI: Breaking down the concept
Machine learning, the basis of many AI systems and the building block of AI algorithms, is defined by the EU AI Act as “including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning”. The complexity of AI systems is a sliding scale, with more intricate systems requiring substantial computing power and input data. The output from these systems can be simple or mightily complex, varying with the sophistication of the AI in play.
This broad definition covers a range of technologies and uses of AI, from your everyday chatbots to highly sophisticated generative AI models, such as ChatGPT. But it’s important to note that not every AI system falling under the Act’s broad definition will be regulated. The Act plays it smart with a risk-based approach, bringing under its regulatory umbrella only those systems associated with specific risk levels.
AI regulation: Calibrated to risk
Here’s where it gets interesting. The EU AI Act has different baskets for AI systems. Some are seen as posing an unacceptable risk to European values, leading to their prohibition. High-risk systems, while not banned, have to dance to a tighter regulatory tune. It’s vital to remember that these risk categories aren't static; the Act is still in a draft stage, and as more changes come, these risk categories will likely be fine-tuned as well.
EU AI Act risk levels
The EU AI Act defines three levels of permissible risk: high risk, limited risk, and minimal risk. Systems in these categories are allowed on the market, subject to obligations that scale with the level of risk. A fourth level, “unacceptable risk”, is not permitted at all: systems falling into it are prohibited, and companies must change their models accordingly to remain compliant.