Artificial intelligence (AI) is quickly becoming one of the most critical areas of focus for both businesses and regulatory authorities. We interact with AI on a day-to-day basis through many mediums: facial recognition for online banking and identity checks, algorithmic pricing in mobile apps like Uber and Lyft, and online advertising tailored to your needs and interests based on models of your activity are just a few examples. AI has become woven into our everyday lives.

Over recent months, there has been significant growth in AI-related guidance, white papers, regulations, and frameworks. In some ways, the AI conversation is reaching a new stage of maturity. In this blog post, we will walk through some of these changes.

Interested in learning more about AI? Watch this webinar.  

What is the existing AI regulatory landscape? 

The current regulatory landscape for AI is anchored by the GDPR, whose data protection rules govern how personal data is used in AI, supplemented by local guidance. Several existing laws also become relevant on a case-by-case basis, depending on the context in which AI is being used. These may include anti-discrimination laws, intellectual property rights, and sector-specific rules.

What is the new framework of AI regulations? 

The emerging framework of AI regulation consists of several new pieces of legislation and guidance. These include the EU White Paper, which makes new proposals for AI regulation; new sets of guidelines, such as the EU AI Working Group Ethical Guidelines on AI; and, most imminently, the EU Digital Services Act, which contains proposals for specific rules on transparency.

What are the key risks of AI? 

Five key regulatory risk areas can be identified for AI:

  • Bias. Bias is a key risk for AI because models can inherit prejudices present in their training data and reproduce them in their outputs (see the sketch after this list).
  • Data Usage. Data usage in AI covers a wide variety of risks. From a privacy perspective, this includes how personal data is used in practice within AI models.
  • Statistical Accuracy. Statistical accuracy is also a relevant regulatory risk because it concerns both the accuracy of personal data and the performance of a model. How well does the model perform in practice? How often does it produce the right results? This becomes extremely important when a determination made through AI carries high-risk implications for an individual.
  • Opacity. A lack of transparency around AI applications is another risk. A degree of information should be provided about the data used and what the AI's outputs mean in practice.
  • Oversight. Oversight as a regulatory risk in AI is about accountability: how you, as an organization, can demonstrate that all of the other regulatory risks have been mitigated in practice.
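
To make the bias and statistical accuracy risks concrete, here is a minimal sketch of what an audit might look like in code. It assumes a binary classifier whose predictions, true outcomes, and a protected attribute are available for an evaluation set; the data below is randomly generated purely for illustration.

```python
# Minimal, illustrative audit of a binary classifier for bias and
# accuracy gaps across a protected attribute. The data is randomly
# generated; in practice you would use real predictions and labels.
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical evaluation set: ground truth, model predictions, and a
# protected attribute (e.g., a demographic group) for 1,000 people.
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
group = rng.choice(["group_a", "group_b"], size=1000)

for g in np.unique(group):
    mask = group == g
    accuracy = np.mean(y_true[mask] == y_pred[mask])  # per-group statistical accuracy
    selection_rate = np.mean(y_pred[mask] == 1)       # per-group positive-outcome rate
    print(f"{g}: accuracy={accuracy:.3f}, selection_rate={selection_rate:.3f}")

# A material gap in selection_rate or accuracy between groups is a
# signal that the model may treat groups differently and needs review.
```

What counts as a material gap is a judgment call that should be made with legal and compliance input, since it feeds directly into the risk assessments discussed below.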

What can help mitigate AI risk? 

It is important to think about how to mitigate the above risks in practice. A variety of methods can be used, and at a high level they align with guidance provided by regulatory bodies: risk assessments, testing and monitoring, vendor management, human review, and new policies and procedures can all help mitigate potential risk when dealing with AI.

  • Risk Assessment: From a privacy perspective, the risk assessment comes in the form of a Data Protection Impact Assessment, which will likely be mandatory if the AI model makes use of personal data. Risk assessments will probably need to be enhanced to cover the wider impact of AI models, and issues such as bias and statistical accuracy, to consider what other risks may be relevant and how they can be addressed.
  • Testing/Monitoring: Testing and monitoring are extremely important: testing helps detect issues before a model goes live, while ongoing monitoring helps catch problems that emerge in production (see the sketch after this list).
  • Vendor Management: For many organizations, artificial intelligence is not something being developed in-house. More likely than not, they are making use of third-party vendors, and it is important that the appropriate contractual measures are put in place.
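
As a rough illustration of the testing and monitoring point, the sketch below tracks a model's live accuracy over a rolling window and flags degradation against a pre-launch baseline. The baseline, threshold, and window size are assumptions chosen for illustration, not values drawn from any regulation.

```python
# Minimal, illustrative post-deployment monitor: track a model's live
# accuracy over a rolling window and flag degradation against the
# baseline measured during pre-launch testing.
from collections import deque

BASELINE_ACCURACY = 0.92  # hypothetical accuracy from pre-launch testing
ALERT_THRESHOLD = 0.05    # flag a drop of more than five points
WINDOW_SIZE = 500         # number of recent predictions to evaluate

recent_outcomes = deque(maxlen=WINDOW_SIZE)  # True/False per prediction

def record_outcome(prediction, actual):
    """Record whether a live prediction matched the eventual ground truth."""
    recent_outcomes.append(prediction == actual)
    if len(recent_outcomes) == WINDOW_SIZE:
        live_accuracy = sum(recent_outcomes) / WINDOW_SIZE
        if BASELINE_ACCURACY - live_accuracy > ALERT_THRESHOLD:
            # In a real system this alert would route to a human
            # reviewer, tying monitoring back to the oversight risk.
            print(f"ALERT: live accuracy {live_accuracy:.3f} vs "
                  f"baseline {BASELINE_ACCURACY:.3f}")

# Example: call record_outcome(pred, actual) as ground truth arrives.
```

Routing such alerts to a person rather than handling them automatically is one simple way to connect monitoring with the human review and oversight measures described above.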

For more information on how you can leverage the power of AI for your privacy, security, and trust program, check out OneTrust Athena.