
Embracing responsible AI: 3 steps to get your organization started

With the latest statement from the White House on responsible AI, it’s clear AI is firmly in the spotlight. Find out how your organization can establish a foundation to address AI risks.

Alexis Kateifides
Program Director, OneTrust Center of Excellence
May 16, 2023


On May 4, 2023, the Biden-Harris Administration announced several actions focused on promoting responsible AI innovation and protecting Americans' rights and safety. The announcement comes against the backdrop of other recent efforts, including the White House’s Blueprint for an AI Bill of Rights and NIST’s AI Risk Management Framework (NIST AI RMF), and follows a meeting between Vice President Harris and the CEOs of various organizations on these issues.

The new actions include:

1. $140 million of new funding to invest in the launch of seven new National AI Research Institutes to advance AI R&D

2. A commitment from AI companies to publicly assess existing generative AI systems and understand how they align with the AI Bill of Rights and the AI Risk Management Framework

3. New policy guidance from the Office of Management and Budget on the use of AI systems by the government

 

Fear around the usage of this technology is at an all-time high and it’s the responsibility of the people who choose to leverage these technologies to make the choices that lead to trust.
Andrew Clearwater, Chief Trust Architect, OneTrust

 

With AI usage skyrocketing, questions continue to be raised around ethical and responsible practices. Several comprehensive AI laws and regulations are working their way through legislative processes, such as the EU’s draft AI Act, and a host of standards, frameworks, and guidance has been issued, including NIST’s AI RMF, ISO’s 23894 Guidance on AI Risk Management, the OECD’s AI Principles, and others.

But as the Biden-Harris Administration noted, it’s not just about law – “when it comes to AI, we must place people and communities at the center by supporting responsible innovation that serves the public good, while protecting our society, security, and economy. Importantly, this means that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public.”

Below are some practical steps you can take to establish a foundation to minimize risks related to AI and machine learning (ML) technologies and promote the responsible use of AI.
 

3 steps to implement responsible AI

1. Establish your team

Given the nature of AI, privacy professionals are uniquely positioned to lead the charge on responsible AI throughout their organization, balancing AI innovation with user privacy. However, developing and using AI responsibly requires a variety of roles to come together to understand, mitigate, and manage the risks that may arise.

NIST’s AI RMF highlights the benefit of treating AI risks along with other critical risks – such an approach “will yield a more integrated outcome and organizational efficiencies”. In addition, the governance structure that will emerge by bridging the different stakeholders within your company will ensure that a systematic approach is taken to decisions concerning AI. 
 

2. Develop an AI inventory

Make a list of all products, features, processes, and projects related to AI and ML, whether they're built in-house or sourced externally. If you already run a privacy or IT risk management program, you can build on your existing data maps or inventories. Otherwise, start with a data mapping exercise that covers both personal information processing and AI and ML technologies. Data discovery can also help you determine how your AI systems will interact with different categories of data.
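To make the idea concrete, here is a minimal illustrative sketch of what a single inventory entry might capture. The record structure and field names are assumptions for illustration, not a OneTrust schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    """One AI/ML product, feature, process, or project in the inventory."""
    name: str
    owner: str          # accountable team or role
    source: str         # "in-house" or "third-party"
    purpose: str        # what the system is used for
    data_categories: list[str] = field(default_factory=list)  # personal data it touches

# Example: registering a third-party chatbot in the inventory
inventory = [
    AIInventoryEntry(
        name="Support chatbot",
        owner="Customer Success",
        source="third-party",
        purpose="Automated first-line customer support",
        data_categories=["contact details", "support history"],
    )
]

# A data discovery pass can then flag entries that process personal data
personal_data_systems = [e.name for e in inventory if e.data_categories]
```

Even a simple structure like this makes it possible to filter the inventory by ownership, sourcing, or data sensitivity when assessing risk.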
 

3. Map your efforts against a framework

The NIST AI RMF aims to provide organizations that develop AI systems and technologies with a practical and adaptable framework for measuring and protecting against potential harm to individuals and society. Mapping your efforts against a framework may help in understanding how to expand or create new governance structures. For example, AI risk questions can be embedded into existing assessments and user workflows, which may be in the form of privacy impact assessments (PIAs) or vendor assessments. Policies, processes, and training can also be updated to include your organization’s approach to AI where necessary.
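As an illustrative sketch of embedding AI risk questions into an existing assessment, the example below maps hypothetical assessment questions (the questions themselves are assumptions, not NIST text) to the four core functions of the NIST AI RMF and checks for coverage gaps:

```python
# The four core functions defined by the NIST AI RMF
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

# Hypothetical questions added to an existing PIA or vendor assessment,
# each tagged with the RMF function it supports (illustrative only)
assessment_questions = {
    "Is there a named owner accountable for this AI system?": "Govern",
    "What data categories does the system process, and in what context?": "Map",
    "How are model performance and bias tracked over time?": "Measure",
    "What is the process for retraining or decommissioning the system?": "Manage",
}

# Coverage check: does the assessment touch every RMF function?
covered = set(assessment_questions.values())
gaps = [f for f in RMF_FUNCTIONS if f not in covered]
```

A coverage check like this is one lightweight way to see which parts of a framework your existing assessments already address and where new questions are needed.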
 

Championing responsible AI together

AI and ML have been among the most talked-about issues of the year. The speed at which the technology, its risks, and the policy around them are developing makes this a unique challenge for organizations and professionals. By building a dedicated, cross-functional team, developing your AI inventory, and embracing a framework to structure your efforts, you can minimize the risks linked to AI usage and foster responsible AI practices. The ethical and moral imperative of responsible AI calls for a collective effort from organizations, developers, and policymakers to ensure that AI innovation remains synonymous with the protection of rights, safety, and trust for everyone.

To learn more about how your organization can get started with AI governance, download the whitepaper, “Navigating responsible AI: a privacy professional’s guide”.  

