Trust intelligence vendor launches AI Governance solution

As artificial intelligence and machine learning algorithms expand into core commercial functions, digital solutions that allow organizations' leaders to govern the use of their AI systems are becoming a necessity.

Recently, trust intelligence vendor OneTrust entered the AI system management space with the launch of its AI Governance solution. It builds on OneTrust's existing trust-management solution and allows customers to use AI for functions such as data inventory work and risk assessment of their own proprietary AI systems. Currently, it is available as an early access program for select customers, with expanded access coming in the fall and general availability on the trust-management platform in 2024.

"AI is a really amazing new technology tool that we have in our toolkits to take data to a new level of productivity, efficiency or value on top of how we can bring products and services and innovation to market for our customers," said OneTrust Chief Product and Strategy Officer Blake Brannon. "In essence, AI is just another use of the same underlying data, and when you think about the governing of your AI usage, it is still kind of a data governance problem."

As the EU Artificial Intelligence Act trilogue negotiations continue, Brannon said data protection laws, such as the EU General Data Protection Regulation, act somewhat as stand-ins for AI regulations in the current absence of jurisdictional laws for AI technologies. Whatever the use case of a given organization's AI system is, he said, it will still need to comply with laws governing the processing of personal data. He said the AI Governance solution flags the types of algorithmic data processing that would be considered "prohibited uses" under the AI Act once passed.
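To illustrate the kind of flagging Brannon describes, a minimal Python sketch might tag declared processing purposes against a shortlist of the AI Act's draft prohibited practices. The category list, activity schema and flag_prohibited_uses helper below are assumptions for illustration, not OneTrust's implementation.

```python
# Hypothetical sketch: flag AI processing activities whose declared purpose
# falls under a shortlist of the EU AI Act's draft "prohibited practices".
# Categories, schema and matching logic are illustrative, not OneTrust's code.

PROHIBITED_PRACTICES = {
    "social_scoring": "General-purpose social scoring by public authorities",
    "subliminal_manipulation": "Techniques that materially distort behavior",
    "realtime_public_biometrics": "Real-time remote biometric ID in public spaces",
}

def flag_prohibited_uses(activities):
    """Return activities whose declared purpose matches a prohibited category."""
    flags = []
    for activity in activities:
        category = activity.get("purpose_category")
        if category in PROHIBITED_PRACTICES:
            flags.append({
                "system": activity["system"],
                "category": category,
                "reason": PROHIBITED_PRACTICES[category],
            })
    return flags

inventory = [
    {"system": "churn-model", "purpose_category": "marketing_analytics"},
    {"system": "citizen-rank", "purpose_category": "social_scoring"},
]
for flag in flag_prohibited_uses(inventory):
    print(f"PROHIBITED: {flag['system']} -> {flag['reason']}")
```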

Additionally, AI Governance, Brannon said, pulls in a variety of AI governance frameworks, such as the U.S. National Institute of Standards and Technology's AI Risk Management Framework and the U.K. Information Commissioner's Office AI toolkit. He said the new OneTrust solution is built to be flexible across the different types of AI systems organizations use, whether commercial or proprietary, and across the relevance of particular frameworks and forthcoming regulations.
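One way to picture that flexibility is a mapping layer that lets a single assessment answer count toward controls in several frameworks at once. The sketch below is a hypothetical illustration of that idea; the control references are placeholders, not official framework citations.

```python
# Hypothetical sketch: map internal assessment questions to controls in
# several AI governance frameworks, so one answer can serve multiple regimes.
# Control references are illustrative placeholders, not official citations.

FRAMEWORK_MAPPINGS = {
    "Is there a documented owner for this AI system?": {
        "NIST AI RMF": "GOVERN (accountability structures)",
        "ICO AI toolkit": "Accountability and governance",
    },
    "Is personal data used in training?": {
        "GDPR": "Art. 5 / Art. 6 (lawful basis)",
        "NIST AI RMF": "MAP (context and data characterization)",
    },
}

def applicable_controls(question, frameworks):
    """Return the controls a given answer evidences for the chosen frameworks."""
    mapping = FRAMEWORK_MAPPINGS.get(question, {})
    return {fw: ref for fw, ref in mapping.items() if fw in frameworks}

print(applicable_controls("Is personal data used in training?",
                          {"GDPR", "NIST AI RMF"}))
```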

"The same fundamental (rules and regulations) that govern privacy will also govern AI," Brannon said. "The product is built in a very flexible way knowing that there's going to be different data sets from different sources, different frameworks and regulations around the world, that are all going to need to govern it."

One of the big-picture challenges the AI Governance solution attempts to solve, Brannon said, is creating a "central inventory" of all the machine learning systems a given organization uses. Another consideration, he said, involves customers using the platform to keep models synchronized, so teams can conduct due diligence and catch training data that results in drift, bias or diminished accuracy.
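As a rough illustration of what a central inventory record and a naive drift check might look like, consider the Python sketch below. The schema, the ModelRecord fields and the 25% relative tolerance are assumptions for illustration, not OneTrust's data model.

```python
# Hypothetical sketch: a central inventory record per ML system, plus a naive
# drift check comparing a feature's training-time mean to live data. The
# schema and the 25% relative tolerance are illustrative assumptions.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ModelRecord:
    name: str
    owner: str
    training_feature_means: dict = field(default_factory=dict)

def drift_alerts(record, live_features, tolerance=0.25):
    """Flag features whose live mean departs from the training-time mean
    by more than the relative tolerance."""
    alerts = []
    for feature, values in live_features.items():
        baseline = record.training_feature_means.get(feature)
        if baseline:
            shift = abs(mean(values) - baseline) / abs(baseline)
            if shift > tolerance:
                alerts.append((record.name, feature, round(shift, 2)))
    return alerts

record = ModelRecord("credit-risk", "risk-team", {"income": 52_000.0})
print(drift_alerts(record, {"income": [70_000.0, 68_500.0, 71_200.0]}))
```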

"The first part of our solution is we integrate with machine learning operations tools, so they stay in sync when a when an ML engineer creates up a new project or adds a new data set to be trained for an existing model being used; we're able to detect that." Brannon said. "We're able to then proactively start to engage with those individuals in your company to start assessing risk."

Despite a general sense of uncertainty among global business leaders about the rise of AI across commerce, there is a consensus on the need for human operators to maintain control over AI systems. According to a Workday poll cited in Forbes, 93% of executives subscribe to this idea. Brannon said conducting responsible AI governance is itself a manually intensive process.

Brannon used the example of society at large being generally receptive to a machine learning algorithm, trained on large health data sets, that could make a predictive diagnosis of a patient's likelihood of developing cancer. In this instance, he said, a human operator would ensure no personally identifiable health data is used by the algorithm, because AI governance requires the operator to evaluate whether a given model complies with data protection laws, as well as with whatever future AI-specific regulations may enter into force.
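As a narrow illustration of one such human-in-the-loop check, a sketch along the following lines might screen a training data set's column names for fields that look like personally identifiable health data before a model is approved. The field list and the exact-match rule are assumptions for illustration only.

```python
# Hypothetical sketch: screen a training data set's column names for fields
# that look like personally identifiable health data before a model is
# approved. The field list and exact-match rule are illustrative assumptions.

PII_HEALTH_FIELDS = {"patient_name", "ssn", "date_of_birth", "medical_record_number"}

def pii_fields_present(schema_columns):
    """Return column names that appear to contain personally identifiable data."""
    return sorted({c.lower() for c in schema_columns} & PII_HEALTH_FIELDS)

columns = ["age_bucket", "tumor_marker_level", "patient_name"]
flagged = pii_fields_present(columns)
if flagged:
    print("Human review required; PII-like fields:", flagged)
else:
    print("No obvious PII fields detected")
```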

"Most people would be on board with that type of (medical) technology but using that same health data to predict the type of food you want to eat, so a company could target advertisements to a person, is something that would require consent," Brannon said. "AI governance is contextual and requires human judgment. So, it's a very critical part of the process."

