The Artificial Intelligence Landscape Blog

Artificial intelligence (AI) is quickly becoming one of the most critical areas of focus for both businesses and regulatory authorities. We interact with AI on a day-to-day basis across a range of channels: facial recognition for online banking and identity checks, algorithmic pricing in mobile apps like Uber and Lyft, and online advertising tailored to your needs and interests based on models of your activity are just a few examples. AI has become part of our everyday lives.

Over recent months, there has been significant growth in AI-related guidance, white papers, regulations, and frameworks. In many ways, the AI discussion is reaching a new stage of maturity. In this blog post, we will walk through some of these changes.

Interested in learning more about AI? Watch this webinar.  

What is the existing AI regulatory landscape? 

The current regulatory landscape for AI is anchored by the GDPR, whose data protection rules govern how personal data may be used in AI systems, supplemented by local guidance. In addition, several existing laws become relevant on a case-by-case basis depending on the context in which AI is being used. These may include anti-discrimination laws, intellectual property rights, and sector-specific rules.

What is the new framework of AI regulations? 

The emerging framework of AI regulation consists of several new pieces of legislation and guidance. These include the EU White Paper on AI, which sets out new proposals for regulating AI; new sets of guidelines, such as the EU AI Working Group's Ethical Guidelines on AI; and, most imminently, the EU Digital Services Act, which contains proposals for specific rules on transparency.

Curious about artificial intelligence? Check out this webinar. 

What are the key risks of AI? 

There are five key regulatory risk areas that can be identified regarding AI.

What can help mitigate AI risk? 

It is important to think about how you will mitigate the above-mentioned risks in practice. A variety of methods can be used, and at a high level they align with guidance issued by regulatory bodies: risk assessments, testing and monitoring, vendor management, human review, and new policies and procedures can all help mitigate potential risk when dealing with AI. A rough sketch of what an automated testing and monitoring check might look like follows below.
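As a loose illustration only, the following Python sketch shows one way a testing/monitoring step might flag model outcomes for human review. The group labels, the selection-rate metric, and the 0.8 threshold are hypothetical assumptions for the example, not a method prescribed by any regulator or by OneTrust.

# Hypothetical sketch: a simple outcome-monitoring check that could feed a
# human-review step. Group labels and the threshold are illustrative only.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved: bool) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparity_flags(rates, threshold=0.8):
    """Flag any group whose rate falls below `threshold` times the highest group's
    rate (a rough four-fifths-style heuristic, used here purely as an example)."""
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

if __name__ == "__main__":
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(sample)
    flags = disparity_flags(rates)
    for group, rate in rates.items():
        status = "REVIEW" if flags[group] else "ok"
        print(f"group={group} approval_rate={rate:.2f} -> {status}")

In this toy run, group B's approval rate falls below the example threshold and is marked for review; in practice, a flagged result would be routed to a human reviewer as part of the testing, monitoring, and human-review measures described above.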

For more information on how you can leverage the power of AI for your privacy, security, and trust program, check out OneTrust Athena.