Embracing responsible AI: 3 steps to get your organization started

With the latest statement from the White House on responsible AI, it’s clear AI is firmly in the spotlight. Find out how your organization can establish a foundation to address AI risks.

Alexis Kateifides
Program Director, OneTrust Center of Excellence
May 16, 2023


On May 4, 2023, the Biden-Harris Administration announced several actions, all focused on promoting responsible AI innovation and protecting Americans’ rights and safety. The announcement comes against the backdrop of other recent efforts, including the White House’s Blueprint for an AI Bill of Rights and NIST’s AI Risk Management Framework (NIST AI RMF), and follows a meeting between Vice President Harris and the CEOs of various organizations on these issues.

The new actions include:

1. $140 million of new funding to invest in the launch of seven new National AI Research Institutes to advance AI R&D

2. A commitment from AI companies to publicly assess existing generative AI systems and understand how they align with the AI Bill of Rights and the AI Risk Management Framework

3. New policy guidance from the Office of Management and Budget on the use of AI systems by the government


Fear around the usage of this technology is at an all-time high and it’s the responsibility of the people who choose to leverage these technologies to make the choices that lead to trust.
Andrew Clearwater, Chief Trust Architect, OneTrust


With AI usage skyrocketing, questions continue to be raised around ethical and responsible practices. Several comprehensive AI laws and regulations are working their way through legislative processes, like the EU’s draft AI Act, and a host of standards, frameworks, and guidance documents have been issued, including NIST’s AI RMF, ISO’s 23894 guidance on AI risk management, the OECD’s AI Principles, and others.

But as the Biden-Harris Administration noted, it’s not just about the law: “when it comes to AI, we must place people and communities at the center by supporting responsible innovation that serves the public good, while protecting our society, security, and economy. Importantly, this means that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public.”

Below are some practical steps you can take to establish a foundation to minimize risks related to AI and machine learning (ML) technologies and promote the responsible use of AI.


3 steps to implement responsible AI


1. Establish your team

Given the nature of AI, privacy professionals find themselves in a unique position to lead the charge on responsible AI throughout their organization, balancing AI innovation with user privacy. However, the development and use of AI requires a variety of roles to come together to understand, mitigate, and manage the various risks that may arise.

NIST’s AI RMF highlights the benefit of treating AI risks alongside other critical risks – such an approach “will yield a more integrated outcome and organizational efficiencies”. In addition, the governance structure that emerges from bridging the different stakeholders within your company will ensure that a systematic approach is taken to decisions concerning AI.


2. Develop an AI inventory

Make a list of all products, features, processes, and projects related to AI and ML, whether they're built in-house or sourced externally. From a privacy or IT risk management program perspective, you can build on your existing data maps or inventories. Otherwise, start with a data mapping exercise that covers the processing of personal information as well as AI and ML technologies. Data discovery can also help when trying to determine how your AI systems are going to interact with different categories of data.
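The inventory described above can be sketched as a simple structured record. The field names and example entries below are illustrative assumptions, not a prescribed schema; the point is that each AI/ML system links back to your existing data map and records the data categories it touches.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an AI/ML inventory (illustrative fields, not a standard schema)."""
    name: str                   # product, feature, process, or project
    source: str                 # "in-house" or "third-party"
    owner: str                  # accountable team or role
    data_categories: list = field(default_factory=list)  # personal data types processed
    linked_assets: list = field(default_factory=list)    # existing data-map entries it touches

# Example inventory built on top of an existing data map (hypothetical entries)
inventory = [
    AISystemRecord(
        name="Support chatbot",
        source="third-party",
        owner="Customer Success",
        data_categories=["name", "email", "chat transcripts"],
        linked_assets=["CRM data map entry #12"],
    ),
]

def systems_processing(records, category):
    """Data discovery: which AI systems interact with a given category of data?"""
    return [r.name for r in records if category in r.data_categories]

print(systems_processing(inventory, "email"))  # → ['Support chatbot']
```

Keeping the inventory as structured records rather than a spreadsheet of free text makes questions like “which AI systems touch this data category?” answerable programmatically.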


3. Map your efforts against a framework

The NIST AI RMF aims to provide organizations that develop AI systems and technologies with a practical and adaptable framework for measuring and protecting against potential harm to individuals and society. Mapping your efforts against a framework may help in understanding how to expand or create new governance structures. For example, AI risk questions can be embedded into existing assessments and user workflows, which may be in the form of privacy impact assessments (PIAs) or vendor assessments. Policies, processes, and training can also be updated to include your organization’s approach to AI where necessary.
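One way to embed AI risk questions into an existing assessment, as described above, is to extend a PIA question set with an AI section and tag each question with the framework function it supports. The questions below are illustrative assumptions, mapped loosely to the NIST AI RMF’s four functions (Govern, Map, Measure, Manage); they are a sketch, not a complete assessment.

```python
# Existing PIA questions (illustrative)
pia_questions = [
    {"id": "P1", "text": "What personal data does this project process?"},
    {"id": "P2", "text": "What is the lawful basis for processing?"},
]

# AI risk questions, each tagged with the NIST AI RMF function it loosely supports
ai_risk_questions = [
    {"id": "AI1", "text": "Does this project train or use an AI/ML model?", "rmf": "Map"},
    {"id": "AI2", "text": "Who is accountable for the model's outputs?", "rmf": "Govern"},
    {"id": "AI3", "text": "How are model performance and bias measured?", "rmf": "Measure"},
    {"id": "AI4", "text": "How are identified AI risks prioritized and handled?", "rmf": "Manage"},
]

# Merge into a single assessment workflow
assessment = pia_questions + ai_risk_questions

# Coverage check: does the assessment touch every RMF function?
functions = {"Govern", "Map", "Measure", "Manage"}
covered = {q["rmf"] for q in assessment if "rmf" in q}
assert covered == functions
```

A coverage check like this turns “map your efforts against a framework” into something verifiable: any RMF function with no corresponding question is a visible gap in your governance program.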


Championing responsible AI together

AI and ML have been among the most talked-about issues of the year so far. The speed at which the technology, its risks, and the policy around them are developing makes this a unique challenge for organizations and professionals. By building a dedicated, cross-functional team, developing your AI inventory, and embracing a framework to structure your efforts around, you can minimize risks linked to AI usage and foster responsible AI practices. And the ethical and moral imperative of responsible AI calls for a collective effort from organizations, developers, and policymakers to ensure that AI innovation remains synonymous with the protection of rights, safety, and trust for everyone.

To learn more about how your organization can get started with AI governance, download the whitepaper, “Navigating responsible AI: a privacy professional’s guide”.  
