
Establishing an AI governance committee: An inside look at OneTrust’s process

Knowing where to start with an AI governance committee can be overwhelming. See how OneTrust set up its committee, and learn how you can get started in your own organization.

Adomas Siudika
OneTrust AI Governance Committee, Privacy Counsel, FIP
November 30, 2023


To effectively govern AI and mitigate its risks to different populations, organizations must establish diverse AI governance committees that set policies, define risk levels and organizational risk posture, evaluate use cases, and ensure human involvement in high-risk processes.

Though most organizations agree that an AI governance committee is crucial to the responsible use of AI, it can be overwhelming to know where to start. In this blog, we'll outline how OneTrust established its own AI governance committee, along with considerations for establishing a committee in your business.

 

Key questions for establishing an AI governance committee

We’re at a key point in AI’s evolution, where the future of AI depends heavily on whether the public will trust AI systems and the companies that use them. OneTrust is fully committed to the adoption and responsible use of human-centric AI systems that adhere to our core company values and ethical principles, and that put people first.

The gradual integration of AI systems throughout our business ecosystem and their widespread adoption will fundamentally change the way we operate as a business. OneTrust decided early on to establish a dedicated internal AI governance committee to oversee our efforts to build a robust AI governance program. The goal of this committee is to ensure our current and future use of AI systems conforms to OneTrust’s responsible AI principles, regulatory standards, and industry best practices.

 

Involvement 

The first step to forming your committee is determining who in your organization will be involved. 

Here are key questions to consider for the involvement stage:  

  • Who is involved?

  • How did you determine participants?

OneTrust’s AI governance committee includes representatives from the key functional areas of the organization, including Legal, Ethics & Compliance, Privacy, Information Security & Architecture, Research & Development, and Product Engineering & Management. Members of the committee have diverse skillsets, experiences, and backgrounds because we believe that cross-functional knowledge sharing is key to an effective AI governance program. 

Tackling AI governance challenges requires engagement of individuals who come from a variety of specialized backgrounds. Responding to the new challenges posed by modern innovation often requires creative solutions that can be delivered when individuals representing different areas of expertise come together and bring their unique perspectives to the table. 

Making sure you have a diverse committee will help you come up with the creative solutions and thoughtful responses that an AI governance program requires.

 

Governance 

Once your committee is formed, it’s time for it to govern your program. A lot falls into this category, but some key questions for the governance stage are: 

  • How does your organization define AI systems? 

  • How do you define risk levels? 

  • How do you ensure human oversight for high-risk systems? 

  • What is your organization’s stance on generative AI systems like ChatGPT? 

Defining AI is an important building block of AI governance programs. We see the tech and business communities, academics, and legal scholars all coming up with different definitions for these digital brains. AI can even be used to define itself; when asked for a definition of AI, ChatGPT says: “AI is the simulation of human intelligence in machines that are programmed to perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision making, and natural language processing.” It’s well articulated and captures the essence of what AI stands for.

At OneTrust, we consulted existing AI regulatory frameworks and decided to use the definition of AI outlined in the EU AI Act. We consider the AI standards being rolled out in the EU to be the most advanced set of AI governance standards, and they are shaping the direction of AI policy globally.

The EU AI Act defines an AI system as a software-based application developed with one or more of the techniques and approaches it lists, such as machine learning, logic- and knowledge-based approaches, and statistical approaches including Bayesian estimation and search and optimization methods.

This definition also specifies that an AI system can generate outputs such as content, predictions, recommendations, or decisions influencing the environments that humans interact with.
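To make this concrete, the definition can be read as a simple inventory record: which of the listed techniques a system uses and which kinds of output it generates. The sketch below is purely illustrative, with hypothetical names; it is not language from the Act or part of OneTrust's tooling.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Technique(Enum):
    """Technique families listed in the (draft) EU AI Act definition."""
    MACHINE_LEARNING = auto()
    LOGIC_OR_KNOWLEDGE_BASED = auto()
    STATISTICAL = auto()  # incl. Bayesian estimation, search, and optimization

class Output(Enum):
    """Output types the Act says an AI system can generate."""
    CONTENT = auto()
    PREDICTION = auto()
    RECOMMENDATION = auto()
    DECISION = auto()

@dataclass
class AISystemRecord:
    """Hypothetical inventory entry used to decide whether software counts as an AI system."""
    name: str
    techniques: set[Technique]
    outputs: set[Output]

    def is_ai_system(self) -> bool:
        # Software qualifies when it is developed with at least one listed
        # technique and generates at least one of the listed output types.
        return bool(self.techniques) and bool(self.outputs)

# Example: a resume-ranking tool built with machine learning that produces
# recommendations would fall under this definition.
tool = AISystemRecord("resume-ranker", {Technique.MACHINE_LEARNING}, {Output.RECOMMENDATION})
print(tool.is_ai_system())  # True
```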

 

How does OneTrust define AI risk levels?

In a similar vein, our internal AI governance program also adopted the AI risk classification system outlined in the EU AI Act. Following the guidelines set forth in the Act, we place AI systems into four risk categories (a simple code sketch of this classification follows the list):

 

Pyramid graphic showing the levels of permissible AI risk defined by the EU AI Act and what the Act requires organizations to do to address each level. Starting from the lowest level: minimal risk areas require a code of conduct; limited risk areas need transparency; high risk areas need conformity assessments; and at the top level are areas that are considered unacceptable.
 
  1. Unacceptable AI systems: AI systems that are classified as too risky for use; e.g., social scoring of individuals based on monitoring them over time, which may lead to detrimental or unfavorable treatment. These systems are prohibited by the AI Act. 

  2. High-risk AI systems: AI systems that pose a high risk of harm to the health and safety of individuals, or a risk of adverse impact on their fundamental rights; e.g., recruitment or selection of candidates for employment (including advertising, screening, or filtering applications and evaluating candidates during interviews or tests), decisions on promotion and termination of employment, task allocation, and monitoring and evaluating the performance and behavior of employees. 

  3. Low or minimal risk AI systems: AI systems that don’t pose known risks to the health and safety or fundamental rights of individuals. Examples of such systems include spam filters and inventory management systems. 

  4. General purpose AI systems: AI systems that use generative AI (GenAI) technology to create original content. Examples of such systems include technology that summarizes long-form content, autonomously creates software code, and generates digital images from natural language. 
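The four categories above map naturally onto the controls shown in the pyramid graphic. The sketch below is a simplified, hypothetical mapping for illustration; it is not OneTrust's actual policy engine and does not capture the AI Act's full requirements.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"        # prohibited outright by the AI Act
    HIGH = "high"                        # e.g., recruitment and other HR systems
    LOW_OR_MINIMAL = "low_or_minimal"    # e.g., spam filters, inventory management
    GENERAL_PURPOSE = "general_purpose"  # GenAI systems such as ChatGPT

# Hypothetical mapping from risk tier to the controls a governance program
# would require before approving a system for use.
REQUIRED_CONTROLS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["prohibit use"],
    RiskTier.HIGH: ["risk assessment", "human review", "vendor vetting"],
    RiskTier.LOW_OR_MINIMAL: ["standard vetting"],
    RiskTier.GENERAL_PURPOSE: ["risk assessment", "usage guardrails"],
}

def controls_for(tier: RiskTier) -> list[str]:
    """Return the controls attached to a given risk tier."""
    return REQUIRED_CONTROLS[tier]

# Example: a candidate-screening system is high risk, so it needs a risk
# assessment, a human review, and vendor vetting before approval.
print(controls_for(RiskTier.HIGH))
```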

 

How does OneTrust ensure there is human review for high-risk processes? 

OneTrust’s AI Use Policy (which will be rolled out shortly) doesn’t allow prohibited AI systems; that same policy sets the processes for assessing the use of all other risk categories. We leverage OneTrust Third-Party Risk Management (TPRM) tools and have developed AI-risk extensions to our existing risk assessment templates. Using this process, we’re able to assess AI-linked risks, which in some instances are connected to other risk domains, like privacy, information security, and ethics. 

While the TPRM process is highly automated, there’s always a human involved in reviewing assessments and following up if there is an issue. We have developed, and are now testing internally, modified versions of Privacy Impact Assessments (PIAs) that include questions about known AI risks when assessing AI systems and our AI service providers. 

These pre-built templates are an effective tool for identifying some of the new AI-linked compliance challenges, like the explainability of an AI system’s processing algorithm or the adequacy of disclosures about personal information processed by AI systems. 

AI systems that use higher risk data, like HR systems that usually include more personal information, must be vetted through the assessment process. This ensures that we gain the right level of visibility into how these systems are operated, what data is used, and whether the system provider followed the regulatory requirements and industry best practices when developing the system. 
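A minimal sketch of that human-review gate is shown below, assuming hypothetical names and fields that stand in for the AI-risk questions added to our assessment templates; the real TPRM and PIA workflows are more involved.

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskAssessment:
    """Hypothetical AI-risk extension to an existing assessment template."""
    system_name: str
    risk_tier: str                                          # "unacceptable", "high", ...
    answers: dict[str, str] = field(default_factory=dict)   # AI-specific questions
    human_reviewed: bool = False                            # set by a reviewer, never automatically

    def can_approve(self) -> bool:
        # Prohibited systems are never approved; high-risk systems require a
        # completed human review in addition to the automated checks.
        if self.risk_tier == "unacceptable":
            return False
        if self.risk_tier == "high":
            return self.human_reviewed
        return True

# Example: an HR system processing personal data is high risk, so approval
# waits until a reviewer has signed off on the assessment.
pia = AIRiskAssessment(
    "candidate-screening-tool", "high",
    answers={"explainability": "vendor provided a model card"},
)
assert not pia.can_approve()
pia.human_reviewed = True
assert pia.can_approve()
```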

 

What’s OneTrust’s policy and stance toward generative AI tools like ChatGPT? 

Our general policy is to support the use of AI systems, including generative AI tools, as long as they’re thoroughly vetted, and reasonable guardrails are put in place to manage the known risks. 

Using our third-party risk assessment process, we’re able to scan for risks and approve the use of AI tools that align with our internal AI Use Policy, including our Responsible AI principles. Rather than banning the use of generative AI, we’ll implement the same vetting protocols as we do for any other category of AI applications. 

Risk assessments for AI systems will cover the whole spectrum of associated risks, including privacy and information security architecture. Based on the results of these assessments, we can make the decision on whether or not to allow the use of that AI application. 

We recognize that we might not be able to completely eliminate the identified AI risks in every case – instead, we’ll turn our attention to how we can mitigate known risks and share best-practice approaches with users of those systems. 

For example, in OneTrust’s forthcoming AI Use Policy, we warn users that content produced by GenAI is not entirely reliable and may not be accurate, and that general purpose AI systems may mistakenly produce inappropriate outputs. We further alert users to use caution and discretion before sharing, publishing, or otherwise using outputs produced by GenAI systems. 

Finally, we advise users that output produced by AI systems should under no circumstances be used as a substitute for legal, financial, or any other professional advice. We are also planning to educate users of AI systems through AI risk awareness training, which is part of the overall set of AI risk mitigation controls we will roll out to our workforce by the end of this year. 

 

Cadence & structure 

The work of your AI governance committee will be ongoing, but it is helpful to have a set cadence for regular meetings. As you’re setting up your processes, consider these key questions:

  • How often will the AI governance committee meet?

  • How will the meetings be structured? 

 

How often does the AI Governance committee meet? 

Currently, OneTrust’s AI Governance Committee is set to meet once quarterly. This cadence may be adjusted if we decide that there is a business necessity for more frequent meetings. That said, a full committee meeting is not the only way the AI Governance committee conducts its business at OneTrust. 

When the committee must make a decision on an initiative or policy, voting is facilitated electronically so that each committee member can cast a vote. At the current stage, most of the AI governance work is conducted in smaller groups, e.g., by the Information Security, Compliance, or Privacy teams. Ad hoc meetings in these smaller groups play an important role in making sure we keep progressing in governing our AI program.

  

How are meetings structured?

The Committee’s meetings are intended to focus on discussions and decision making around the key areas of responsibility, which include reviewing and approving AI-linked projects and initiatives, developing AI governance policies and procedures, and monitoring that the use of AI aligns with OneTrust Responsible AI principles and values.

 

Getting started with AI governance

Although standing up an AI governance program can seem overwhelming at the start, taking it one step at a time and making sure you have the right team in place goes a long way. To learn how OneTrust can support you in your AI governance journey, request a demo today. 

