

April 13, 2026

AI Runs On Data, Do You Have Permission To Use It?


About this episode


Consent isn’t a checkbox – and AI is forcing organizations to confront that reality fast. Eric Bowlin, a partner in digital trust and privacy at Deloitte & Touche LLP, shares how one global organization is building AI capabilities on top of consented data without breaking trust. Drawing from real deployments in highly regulated environments, Eric explains why consent is the foundation of AI-ready governance, how to distinguish acceptable AI use from riskier territory, and what leaders need to get right to move quickly and responsibly as AI use cases evolve.


Host
Ojas Rege


Guest
Eric Bowlin





Ojas Rege is SVP and GM of Privacy and Data Governance at OneTrust, with 35 years of experience in enterprise security and applications. He advises global organizations on responsible data and AI strategy. His perspective on technology has been featured in Bloomberg, CIO Magazine, Financial Times, and Forbes. Ojas holds BS and MS degrees in Computer Engineering from MIT, an MBA from Stanford, is a Fellow of the Ponemon Institute, and holds CIPP/E and CIPM privacy certifications.

Eric Bowlin, a partner at Deloitte & Touche LLP, serves as the Life Sciences & Health Care privacy leader for Deloitte Risk & Financial Advisory. He has nearly 20 years of privacy, cyber, and business process risk management experience including program and internal control design, assessment, implementation, and auditing. He specializes in leading large, complex engagements for many of Deloitte’s largest clients and has provided services to clients in nearly every industry.

Ojas Rege:

AI has the potential to dramatically enhance digital experiences, providing better personalization and deeper customer engagement. But all this requires the AI system to have access to personal data. The problem? Individuals own their personal data, not companies. They must consent to its use. But how do you get consent right in an AI world? What are the risks? Do you have to ask for consent all over again?

 

Eric Bowlin, Partner in Digital Trust and Privacy at Deloitte & Touche LLP, takes us inside these new rules of consent in the age of AI. He explains how companies can talk to their data responsibly, use it to innovate, and keep trust at the center of everything they do. If you're trying to understand where AI creates opportunity in go-to-market experiences and where it might quietly introduce risk, this conversation is for you.

 

 

Eric Bowlin:

A lot of organizations now are struggling with that challenge of, "Do we need to re-consent? Do we need to provide updated privacy notices to our customers to let them know that the way that we're using this data has changed?"

 

 

Ojas Rege:

Difficult questions like these now sit at the center of AI governance and they're exactly the type of legal and ethical decisions Eric Bowlin helps his clients navigate. Eric is a partner in digital trust and privacy at Deloitte & Touche LLP. And in this age of AI, he believes that all comes down to how organizations show customers that they're acting responsibly.

 

 

Eric Bowlin:

It's really digital trust that you're trying to build, right? And that might be in how you shape and orchestrate a digital experience, but it's also the way that you actually curate and take care of customers' personal data. There's so many different aspects that go into that big picture view of what is digital trust.

 

 

Ojas Rege:

Eric is currently working with a client on that big picture view of digital trust. It's a global pharmaceutical company that's transitioning its customer data from a third-party provider to in-house systems. Their goal is to manage the data themselves and to responsibly unlock its value using AI.

 

 

Eric Bowlin:

As they are going through that process, they have also realized there's a lot of new consent purposes that they want to use this data for, which is always an interesting challenge at organizations. The privacy folks always want you to re-consent, and the marketing folks always say, "Nope, we're good. We don't want to lose all this data that we already have access to." And so there's always, I think, a struggle there.

 

 

Ojas Rege:

Many of us can relate to this, trying to balance consent with the growing pressure to use information in new ways. So how do you get consent right and harness the power of data for AI? How does strong governance reduce risk and build confidence along the way? Stick with us to hear Eric's insights on navigating these trade-offs, including how organizations are beginning to use new tools to actually talk to their data.


I'm Ojas Rege, GM of Privacy and Data Governance, and this is Trustonomy, an original podcast from OneTrust. This season, we're exploring AI-ready governance, how companies are modernizing programs to move at AI speed and turn oversight into advantage. We go behind the scenes to see AI governance in action. Follow and share the show. Let's take this journey together.

 

 

Eric Bowlin:

We are working with some AI agents now at this organization where data engineers can basically chat with the data, which is great because it really takes some of the complicated SQL and custom reporting out of the equation and it puts that power directly into the hands of the business. It's a really powerful tool when you think about it.

 

 

Ojas Rege:

We've all talked about this challenge for years, organizations sitting on terabytes of consented client data and not having a great way to interrogate or analyze it, not to mention how long it takes to manually go through it all. That's changed with AI, and it's brought new opportunities from streamlining and helping with analytics, sales, and personalization to potentially saving lives.

 

 

Eric Bowlin:

This particular client has created patient assistance programs and different digital apps that patients can use to actually interact with the organization. And so patients are putting in there what drugs they're taking, how often they're taking them, what their dosages are, maybe what their symptoms are, and how they're feeling. There's just a wealth of data there that is being collected, and we've now put this agent on top of it. My client can use that to predict all sorts of adverse events. We can use that agent to interrogate all that data, and maybe introduce some prompts to that digital companion, that app on their phone. And such an important part of the treatment cycle, of making sure that people get healthy, is taking your drugs on time.

 

 

Ojas Rege:

Eric just laid out a great example of how a pharmaceutical company can use AI agents to interact with data. And that's something many organizations want, the ability to use customer insights more effectively to inform better decisions. At the same time, this groundbreaking technology introduces a few new challenges.

 

 

Eric Bowlin:

As AI has come onto the scene, I think it has forced all of us to think a whole lot more broadly about use cases because when we work with our clients to manage consent, that's the starting point, right? It is like what's the business objective, and therefore, what's the personal data that needs to be obtained? What is the consent language that needs to be included to collect the consents to be able to drive business value? And so when you think about how AI has changed that, the number of use cases has just exploded. And so now, I think it also forces us to think a lot more broadly, right? In terms of all the different ways that this data could be used downstream.

 

 

Ojas Rege:

This is the power of AI, the ability to use data in so many different ways. But if that data is going to be used downstream for new use cases, how do you maintain transparency with customers on what you're actually doing with their information?

 

 

Eric Bowlin:

We've seen a lot of news out there where organizations have been using personal data to train AI models without collecting explicit consent to do so. And that's really been creating a lot of noise and trouble with regulators.

 

 

Ojas Rege:

So it forces the enterprise to really think through what they're going to be using the data for. And just because AI lets them use the data for something else, it doesn't mean they suddenly can.

 

 

Eric Bowlin:

Absolutely. I think that's the next maturity level in consent management within all industries broadly, is that organizations need to be thinking about consent that way. I think, unfortunately, many have thought, "Hey, once I've got the data, I can use it for whatever I want," right? But just because we have the data in-house, it doesn't mean that we can do whatever we want with it.

 

 

Ojas Rege:

Let's pause on that for a moment. This is a core tension. If we ask for re-consent, are we risking losing all consent for that data?

 

 

Eric Bowlin:

The privacy folks always want you to re-consent, and the marketing folks always say, "Nope, we're good. We don't want to lose all this data that we already have access to."

 

 

Ojas Rege:

That push and pull is exactly why consent isn't just a box to check. It's the foundation for building trust with your customers.

 

 

Eric Bowlin:

It is literally step one. The first thing we have to do before we can do anything else with that data is we have to collect consent, right? And so that's why consent has been so critical for this organization as we've worked with them: they understand that it's the building block. It's that first critical step.

 

Personal data is collected for specific processes and for specific purposes. And you need to respect that consent and make sure that you're using the data consistently with the purposes for which it was originally collected, right? And so that's the process that we (Deloitte and the client) went through here to make sure that we were honoring those consents and being consistent with the consent that was originally provided by those patients or those physicians. And if it isn't, well, then we're going to have to go back and revise the consent language and re-consent people.
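The purpose-consistency check Eric describes can be sketched as a simple lookup: each stored consent carries the purposes disclosed at collection, and a proposed new use is allowed only if it matches one of them; anything else gets routed to re-consent. This is a minimal illustration, and the record structure and purpose names are hypothetical, not drawn from the episode.

```python
from dataclasses import dataclass, field


@dataclass
class ConsentRecord:
    """A subject's consent, tied to the purposes disclosed at collection."""
    subject_id: str
    purposes: set[str] = field(default_factory=set)


def check_purpose(record: ConsentRecord, proposed_purpose: str) -> str:
    """Allow a use only if it matches a purpose the subject consented to;
    otherwise flag it so the team can revise notices and re-consent."""
    if proposed_purpose in record.purposes:
        return "allowed"
    return "needs_reconsent"


# Hypothetical example: consent collected for support and adherence reminders.
record = ConsentRecord("patient-001", {"patient_support", "adherence_reminders"})
print(check_purpose(record, "adherence_reminders"))  # allowed
print(check_purpose(record, "model_training"))       # needs_reconsent
```

The useful property is that a new AI use case such as model training fails the check by default, which forces the re-consent conversation rather than letting the data drift into new purposes silently.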

 

 

Ojas Rege:

That is exactly what Eric's pharmaceutical client is doing for some of their use cases.

 

 

Eric Bowlin:

And so we are actually working with them right now on a strategy for re-consent.

 

 

Ojas Rege:

But for other use cases, they found it isn't a necessary step.

 

 

Eric Bowlin:

They were sticking with the original processing purpose for this data, using it consistent with the way that it had been used before, just putting the agent on top of it to better interrogate the data, which did not materially change the type of consent that needed to be collected.

 

 

Ojas Rege:

And when making decisions around consent as that first critical step, is there a difference sometimes between what is required legally and what you need to do to establish trust?

 

 

Eric Bowlin:

That is a conversation that we are having with our clients every day. Just because you're doing what's legally required, it doesn't mean you're doing the right thing to build trust with your constituents, right? And I think that's the big difference there. And so we're always trying to help our clients think through that in terms of, "This isn't about what's legally required. We need to think about like, 'What's the right way to treat your constituents and how do we help you do that in terms of how you engage with them, being transparent in terms of collecting data, then also the follow through?'" Right? You have to make sure that when you make that commitment to them upfront in terms of how you craft a notice, how you collect consent, what that engagement looks like, that you actually then follow through on it. And you're an ethical custodian of it going forward in terms of the day-to-day management of it.

 

 

Ojas Rege:

When you think about this specific client and this process they were going through to build these new AI capabilities on consented data, was there a moment where you stepped back as a team and said, "Wait a second. It isn't clear what's required. We're going to have to make some assumptions"?

 

 

Eric Bowlin:

Yeah, absolutely. For this specific organization, they did the right thing in terms of making sure that they were bringing a cross-functional team to the discussion, right? Which means that it wasn't just a data privacy team. It wasn't just their AI governance team. They were also bringing data management folks. They were bringing the business to the discussion and making sure that they were having regular conversations about, "What are the use cases that we're actually hoping to drive here? How are we going to use this data?" Thinking about next best action, thinking about the business being able to query that data, and making sure that they're getting that cross-functional team in the room to think through some of those challenges. From an AI governance standpoint, I think we've seen that's been a critical key to success for a lot of organizations: different functions can't make these decisions in a silo. You really do have to get different knowledge sets in the room to make sure that you're making well-informed decisions about how you're going to work with this data.

 

 

Ojas Rege:

Was there any debate in this particular client around what guardrails were going to be required to make sure this data was not used inappropriately?

 

 

Eric Bowlin:

Yeah. This client, in particular, there's been some really interesting discussions about the patchwork of U.S. state privacy law. And a lot of the data that we were using here falls under some of the sensitive data consent requirements that we've seen pop up in different states over the past three to five years here in the U.S., and thinking from a technology standpoint in terms of like, "How do you actually enable that? Do we create a common denominator across the entire U.S. in terms of saying like, 'Here's the situations where we need to make sure that we capture consent and we're going to treat it this way across all U.S. states'?"

 

Well, this specific client decided that they wanted to handle each state individually and separately in terms of how they were capturing, cataloging, archiving their consent data, which when you think about it, right? When you think about the patchwork of U.S. state privacy laws that we've got now, right? 20 plus state privacy laws and they all have their own different twists and turns in terms of the requirements, it's been quite the engineering challenge for us. But we were able to do it working in concert with them.
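The per-state approach Eric describes amounts to a jurisdiction-keyed rules table with a conservative default for states you haven't modeled yet. Here is a minimal sketch of that shape; the state entries and flag values below are illustrative placeholders, not an accurate summary of any state's law or of the client's actual rule set.

```python
# Illustrative per-state consent rules; the values here are placeholders,
# not legal guidance or a summary of real statutes.
STATE_RULES: dict[str, dict] = {
    "CA": {"sensitive_data_opt_in": False, "opt_out_required": True},
    "CO": {"sensitive_data_opt_in": True,  "opt_out_required": True},
    "VA": {"sensitive_data_opt_in": True,  "opt_out_required": True},
}

# Conservative default: states without an explicit entry get the strictest
# treatment, so a missing rule never silently weakens consent handling.
DEFAULT_RULE = {"sensitive_data_opt_in": True, "opt_out_required": True}

def consent_required(state: str, is_sensitive: bool) -> bool:
    """Decide whether opt-in consent must be captured before processing
    sensitive data for a subject in the given state."""
    rule = STATE_RULES.get(state, DEFAULT_RULE)
    return is_sensitive and rule["sensitive_data_opt_in"]

print(consent_required("CO", is_sensitive=True))
print(consent_required("CO", is_sensitive=False))
```

The engineering challenge Eric mentions is mostly in keeping a table like this current across 20-plus laws and wiring each state's answer into how consent is captured, cataloged, and archived; the lookup itself stays simple.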

 

 

Ojas Rege:

From this project, any learnings about the role consent plays within a broader AI-ready governance framework that you can then generalize to other companies?

 

 

Eric Bowlin:

For a lot of organizations, I think as they've been thinking about the risks of AI, reminding ourselves that they're really just privacy risks that have been around for a long time, right? It all comes down to someone having personal control over their own personal data and how it's used, right? It's fundamental. It's something that we've been talking about since what? 2016, probably, with GDPR. And so I think that's the important thing that has not changed. We're talking about just respecting personal data and how it's used. So I think that's the critical building block there. And like I said, when you think about rights to data, the first thing we have to do before we can do anything else with that data is we have to collect consent.

 

 

Ojas Rege:

Governance, and specifically around consent, but even more broadly, becomes so key in all of this. There's a challenge that a lot of companies face where they worry that governance is going to slow down innovation instead of enabling it. What are your learnings so far?

 

 

Eric Bowlin:

It's funny you ask. I had this conversation with a client 90 minutes ago talking about this exact topic. And what I said to them is a couple things. One is let's make sure that we're thinking about things from a risk basis, right? Let's not take the same risk management sledgehammer to everything, which, I think, is really important. We need to use the 80/20 rule, right? And expend 80% of our effort on the 20% of risk that's really the high risk things that we really need to worry about.


The other thing I would say is that I think it's so important for organizations to establish guardrails to help their business users understand, "Hey, we've defined this as okay. And as long as you stay within this from an AI perspective in terms of the use cases, how you're using personal data, all those things, that it's okay. And if you're within that, go fast, right? But when you get outside of those guardrails, then, obviously, we need to apply more controls and more governance and more risk assessment. The further you get outside of it, the more that we need to manage the risk associated with it."

 

And so that's how I think of it is it's similar to privacy risk. I tell clients all the time that there's really maybe four to eight questions you can ask about most things upfront to get a good feel for what's the risk level associated with this. And if the answers come back good, let's not slow down the business. Let them keep going. But if those answers throw up some red flags, well, then we need to take a look at it and make sure that we're handling that data ethically and not doing anything that's going to erode that digital trust that we have worked so hard to build.
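Eric's "four to eight questions" triage can be sketched as a weighted screen: each yes answer adds risk, and the total decides whether a use case goes fast, gets a quick look, or stops for a full assessment. The questions, weights, and thresholds below are assumptions for illustration, not Eric's actual screening criteria.

```python
# Hypothetical screening questions; each True answer adds its risk weight.
QUESTIONS: dict[str, int] = {
    "uses_sensitive_personal_data": 3,
    "trains_or_fine_tunes_a_model": 2,
    "shares_data_with_third_parties": 2,
    "purpose_differs_from_original_consent": 3,
    "automated_decisions_affect_individuals": 2,
}

def triage(answers: dict[str, bool]) -> str:
    """Sum the weights of 'yes' answers and map the score to a review path."""
    score = sum(w for q, w in QUESTIONS.items() if answers.get(q, False))
    if score == 0:
        return "go_fast"           # inside the guardrails: let the business run
    if score <= 4:
        return "light_review"      # quick check, then proceed
    return "full_risk_assessment"  # red flags: stop and assess properly

answers = {
    "trains_or_fine_tunes_a_model": True,
    "purpose_differs_from_original_consent": True,
}
print(triage(answers))  # full_risk_assessment
```

The point of the design is the 80/20 split Eric describes: most use cases score low and move immediately, so the heavier governance machinery is reserved for the small set that actually warrants it.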

 

 

Ojas Rege:

Given the uncertainty around AI regulation, what's one piece of advice you would give organizations trying to build an AI-ready governance strategy with consent as a foundation?

 

 

Eric Bowlin:

Probably, the thing I would say right now, when we think about, right? The federal government's moratorium on AI regulations for states and what's the uncertainty that we're going to have there, and nobody really knows where that is going to go, what I would say is that the burden of treating your constituents, your customers, your patients, whoever it might be in an ethical way and respecting them and their personal data and how you manage that, that's not going to change, right? And so when I think about digital trust, that's how I think about it. Organizations need to do right by their customers and by their constituents, handle their data in an ethical way regardless of what the regulations say. If organizations stick to those principles and make sure that they're governing by them on a regular basis, then it makes it a lot easier to stay in compliance when the measuring stick for compliance, I feel like, is changing on a daily basis.

 

 

Ojas Rege:

So does AI actually change any of those principles of digital trust or not?

 

 

Eric Bowlin:

I don't think it does. You think about digital trust, right? Through omnichannel experiences, right? Whether it be in-person, text, email, whatever it might be, we're trying to use all these different touch points with a customer to build trust with them and help them understand that, yes, we're going to use their data, but we're going to collect it and use it in responsible ways regardless of whether that is data that has been filled out on a paper form and shared with the company or data that has been submitted through a website that is going to go into a data warehouse that is going to be interrogated by an AI model sitting on top of it. The concepts and that trust that you have to maintain should be the same regardless of how it's collected, where it's sitting, how it's being used. It has to be consistent across the board.

 

 

Ojas Rege:

If I give you my personal [inaudible 00:18:19], I expect you to treat it ethically and responsibly no matter what format it comes to you and what technology you use.

 

 

Eric Bowlin:

Exactly. That's it. It's very simple.

 

 

Ojas Rege:

As Eric said, the notion of treating data responsibly is simple to express, but not always simple to implement. Here are a few compelling points that stood out to me in my conversation with Eric. Consent isn't just an ethical exercise. Consent drives trust, and there is absolutely a business case for customer trust because so much of AI's value comes from personalization, customer programs, and engagement. Consent becomes a foundation of the data strategy that feeds AI. Consent is not a one-and-done deal. It's an ongoing conversation with your customers.

 

AI now offers an opportunity to think more expansively about how we use customer data. That's true, but we always have to continuously evaluate those new use cases and check whether they map to the consent we already have. If not, we either need to adjust the use case or continue the conversation with the customer to get re-consent. And the principles of your digital trust strategy don't change with AI; AI just changes how you implement them. You're still focused on what you're using data for, but now you may need to be more transparent and help customers understand some of the new ways that their data is or might be used in the future.

 

This is Trustonomy. And in this series, we're exploring what AI-ready governance looks like, not just in theory, but in practice. If you've picked up something useful, share the episode and follow the show so you don't miss the next one. We'd love to hear what's working for you, what lessons you've learned, and what you want to hear more about in future episodes. Visit us at onetrust.com/podcasts/trustonomy. I'm Ojas Rege. Thanks for listening.

More from Trustonomy

Predicting vs Adapting: Future-Proofing Your AI Framework

How do you prepare for the future of AI when things are moving so fast? Nick McQuire advises C-level executives on emerging tech and enterprise innovation. He looks ahead to where AI could be going - the risks and the opportunities - and reveals how to future-proof your organization and AI Governance program now.

 


April 27, 2026 24 min read

Learn more

Why Everyone Should Help Break Your AI

You don’t have to choose between fast and safe. Pam Snively, TELUS’ Chief Data & Trust Officer, shares how she convinced internal skeptics to purple team TELUS’ first customer-facing Gen AI support tool and what this innovative approach taught her about balancing risk, speed, collaboration, and innovation in AI Governance.

 

March 30, 2026 23 min read

Learn more

AI Ambassadors: Bridging the Gap Between Builders and Accountability

It’s hard to scale AI Governance successfully, but it’s not impossible. Milin Chhanechhara, Lead Data Scientist at Lumen, and Andrew Gaskins, Principal Solution Architect at Lumen, share how one program has broken down silos in their organization and what that’s made possible for their global teams.

 

March 16, 2026 23 min read

Learn more

Bottleneck to Breakthrough: AI Governance That Scales

Bryan McGowan of KPMG explains how rigid, one-size-fits-all AI governance created a backlog of stalled use cases—and how a flexible framework can scale AI securely while preparing for agentic AI, identity controls, and lifecycle testing.

March 02, 2026 18 min read

Learn more

Episode 5: The Tylenol murders and the trust recovery

This season we’ve been sharing stories about companies and organizations that made mistakes and lost public trust. In this episode, we look at a company that did nothing wrong but had to find a way through a crisis to rebuild trust.

November 23, 2023 28 min read

Learn more

Episode 4: The privacy breakdown that betrayed a nation

Many companies collect personal data - names, birthdays, interests, payment information, and geolocation. But there’s no data more private and sensitive than biological data. So what happens when that information is used without consent?

November 09, 2023 28 min read

Learn more

Episode 3: The missing data that doomed Pearl Harbor

Companies run on data. It’s the backbone that allows them to understand their customers, make informed decisions, and see the big picture. But what happens when you don’t know what data you have, where it is, or how to access it?

October 26, 2023 30 min read

Learn more

Episode 2: Blowing the whistle on the Space Shuttle Challenger disaster

There’s a fine balance between getting things done and getting them done the right way. Every business has deadlines, technical hurdles, and contractual pressures to consider. But what happens when you create an environment that prevents people from sharing ideas and concerns?

October 12, 2023 28 min read

Learn more

Episode 1: The safety shortcuts that sank a steamboat company

When you run a business, you build relationships with other businesses. They become your vendors and suppliers. But what happens when these third parties make decisions that put your customers and your business at risk?

September 28, 2023 26 min read

Learn more