
On-demand webinar coming soon...


March 30, 2026

Why Everyone Should Help Break Your AI


About this episode


You don’t have to choose between fast and safe. Pam Snively, TELUS’ Chief Data & Trust Officer, shares how she convinced internal skeptics to purple team TELUS’ first customer-facing Gen AI support tool and what this innovative approach taught her about balancing risk, speed, collaboration, and innovation in AI Governance.


Host
Ojas Rege


Guest
Pam Snively





Ojas Rege is SVP and GM of Privacy and Data Governance at OneTrust, with 35 years of experience in enterprise security and applications. He advises global organizations on responsible data and AI strategy. His perspective on technology has been featured in Bloomberg, CIO Magazine, Financial Times, and Forbes. Ojas holds BS and MS degrees in Computer Engineering from MIT and an MBA from Stanford, is a Fellow of the Ponemon Institute, and holds CIPP/E and CIPM privacy certifications.

Pamela Snively, Chief Data & Trust Officer at TELUS, leads enterprise-wide privacy, data and AI governance, data ethics, data management, and key compliance functions, grounding TELUS’ approach to data governance in customer trust. Under her leadership, TELUS has advanced transparency through initiatives like the refreshed TELUS Privacy Centre, offering clear, layered insights into data practices and customer protection. An active contributor to global AI and privacy forums, Pam serves on multiple governance bodies, is the founding member and Chair of Canada’s Business Privacy Group, and is a frequent speaker advocating for stronger consumer trust across the digital ecosystem.

Ojas Rege:

In tech, we're often told there's a trade-off: move fast or govern well. But in the age of AI, that's a false choice, and a very risky one. On this episode of Trustonomy, we're talking about why governance isn't the enemy of innovation but a discipline that actually enables it. I'm joined by Pam Snively, Chief Data and Trust Officer at TELUS.

 

When TELUS was getting ready to launch its very first generative AI customer support bot, Pam was right in the middle, pulled on one side by teams eager to move fast, and on the other by teams worried deeply about risk. What she'll share isn't a compromise, but a different approach altogether.

 

Back in 2023, Pam Snively found herself in a difficult spot.

 

 

Pam Snively:

We're at this point where there's a group of people that do not want to do any further testing, and the view was if we set up a whole new process for testing, it's going to slow us down. I still remember them saying, "Pam, we built this from the ground up. We know this tool. You're not going to find anything."

 

 

Ojas Rege:

Pam's the Chief Data and Trust Officer at TELUS, a leading communications technology company operating in more than 45 countries. The tool in question was their first Gen AI customer support bot. Pam was feeling the pressure to move fast from one team.

 

 

Pam Snively:

And then there were the people on the other side, largely on my team, but also other areas of the company, hugely skeptical about putting anything Gen AI in front of our customers, convinced it was just too inherently unreliable. It wasn't a great moment to be sitting there saying, what do we do?

 

 

Ojas Rege:

Here's some context on Pam's role as chief data and trust officer.

 

 

Pam Snively:

I have the privacy portfolio and the data governance portfolio, but over time we added data ethics to that. We now have AI governance, so really all of the areas that can impact customer trust, stakeholder trust when we're using data.

 

 

Ojas Rege:

All of these areas were at play in the launch of the AI tool, in the pushback Pam got and in guiding her next steps. When TELUS was first working on Gen AI, it was pretty novel.

 

 

Pam Snively:

I think it still is very novel. There are a lot of organizations that are very hesitant to do that, so we were trying to figure out how we could take a support bot that we had developed internally and put it in the hands of our customers.

 

 

Ojas Rege:

But balancing risk and safety isn't easy, especially when we're talking about a new AI tool.

 

 

Pam Snively:

At the time, we really wanted to move quickly. We wanted to get it out there. We saw the potential for AI and so there was a lot of pressure to build this thing and to move quickly on it. But we also knew that there were risks, because it was Gen AI and because we know that Gen AI is inherently not perfectly reliable. And we also knew that it was a hot new thing at the time. It still is. And when an organization launches anything public facing, people are going to test it: media, hackers, and of course just customers using it. But we knew it would go beyond just the way customers use it. We knew people would actually try to break the system. So that was something that we worried a great deal about.

 

 

Ojas Rege:

A lot is at stake whenever you introduce AI into the equation, especially if it's customer facing. We all hear stories about bots gone rogue.

 

 

Speaker 3:

How can I help you?

 

 

Ojas Rege:

So what did Pam do to get Gen AI to market given all of these risks? I can talk to her all day about this stuff. She's not afraid to be real or to get into the nitty-gritty, and that's exactly what happens in this conversation. Find out how she convinced the skeptics to keep testing. How she balanced the pressures of speed and safety and what this moment taught her about AI governance in action.

 

I'm Ojas Rege, GM, Privacy and Data Governance, and this is Trustonomy, an original podcast from OneTrust. This season we're covering AI governance: the mindsets, frameworks, and strategies business leaders are using now to set themselves up for responsible innovation and operational excellence. Follow and share the show as we take this AI governance journey together. So we left Pam in a tight spot, maybe even sweating a bit; I know I'd be. TELUS wanted to release their Gen AI bot and Pam was getting internal pushback. She knew the bot needed more testing and she knew existing security checks wouldn't cut it. They needed a new approach.

 

 

Pam Snively:

I still get excited when I think about this process. So we looked at who would be best positioned to assess the risks associated with this, and we realized that it wasn't the traditional, oh, it's the privacy team or the security team or the development team. We realized that the risks come from potentially anywhere and could be dreamed up by anyone. So we thought about pulling together red teaming, the traditional approach of assessing risk, with blue teaming, the traditional approach of closing those risks, addressing those risks. And said, okay, well let's call it purple teaming. It suited us well because purple is one of our corporate colors.

 

 

Ojas Rege:

So instead of the traditional split, one team simulating attacks while the other responds and patches, the red and blue teams now operated together, integrating offensive and defensive testing into a continuous feedback loop. And the call to be part of this purple teaming exercise went way beyond the security team.

 

 

Pam Snively:

We talk a lot about building inclusive AI, so we thought we should build our processes for both building it and testing it in an inclusive manner. And so we went across the whole organization and we opened up to everyone in the organization the opportunity to try to test and work with this new support bot.

 

 

Ojas Rege:

That's a lot of people to pull in.

 

 

Pam Snively:

It was a lot of people. I think it really suits the philosophy that we have around AI to have this very cohesive approach and inclusive approach to how we test for risk. And so we didn't define the parameters at the beginning. We left it fairly wide open so that people could figure out for themselves what they thought would be a good way to try to break the bot.

 

 

Ojas Rege:

If I understand correctly, this call to break the bot was open to the whole organization, over 70,000 employees. I can see why some of your colleagues were concerned. The testing process sounds really hard to manage.

 

 

Pam Snively:

I can tell you, Ojas, it was not well received. The development team had worked really hard on this tool and they had been testing it in the traditional manner internally and we'd been using it internally. And so there was a lot of confidence in it and the view was it's going to slow us down and we're highly unlikely to identify any new risks that we haven't already identified, any new vulnerabilities. And then of course there were also the people that would traditionally be in the blue team, a lot of those were on my team.

 

 

Speaker 3:

This is too risky.

 

 

Pam Snively:

Who were quite dubious that we would actually be able to close the risks. They were very concerned about putting this technology in front of customers. And so really I was getting it from both sides: this is a waste of our time.

 

 

Ojas Rege:

So how did you convince both sides that this wasn't a waste of their time, that purple teaming was the way to go or at least something that needed to be tried?

 

 

Pam Snively:

We do have a very strong customer-first culture, so I was able to pull on that as a reason to proceed with the testing. We appealed to them on the basis of protecting our customers and protecting the brand. And I think also in those moments where there just isn't enough information, at the end of the day we had two really loud voices, one saying there are no risks and one saying there are too many risks. The only real choice for me, and what I had to persuade both teams of, was: we need more information to prove you out, and then we won't have to do it again if you're right.

 

 

Ojas Rege:

So you used the scientific method to combat skepticism. I love it.

 

 

Pam Snively:

I guess that's right. You said that way better than I did, Ojas, thank you.

 

 

Ojas Rege:

You and your team were the face of this new approach. How much pressure were you feeling for it to work?

 

 

Pam Snively:

Well, I can say: a lot of pressure. We were doing what everyone doesn't want to do, and that is slowing down innovation by going through this process. And then we were also, as I say, very concerned that if we didn't find things, maybe this wasn't the way to do it. And then very concerned that we wouldn't be able to solve for what we did find. Once we started the purple teaming, I was a little nervous; I didn't know what was going to happen.

 

 

Ojas Rege:

Imagine waiting as hundreds of people work on trying to break the bot, trying to find those hidden vulnerabilities and working on how to fix them all at the same time. And then Pam gets an email.

 

 

Pam Snively:

One of the directors on my team who had really been spearheading the whole thing, Jesslyn Diamond, reached out to me not long after it had started and sent me just a ping saying, "We're finding a lot of vulnerabilities." And so on the one hand I'm thinking, oh, yay, it worked. And on the other hand I'm thinking, oh no, we're not going to be able to launch. For those of us whose job it is to identify risk, and we knew there would be certain risks with a Gen AI tool, even we were surprised at what we found, because as I say, we had been using this internally, but we hadn't been trying to break it internally. Honestly, I didn't know how much they would try to break it.

 

 

Ojas Rege:

Turns out the process of purple teaming took far less time than Pam expected to both find vulnerabilities and to close them. Purple teaming paid off.

 

 

Pam Snively:

From that moment forward, no one has ever questioned us using this technology again. They were just thrilled that we found these things so that we could close them off. And then the good part was we did close them off. So every single vulnerability that we found, we were able to address. Now admittedly, when we first addressed those, it was a little clunkier than we would've ideally liked. When you put all the guardrails in, it doesn't move as fast as the process and system without the guardrails, but boy, it's worthwhile putting the guardrails in, and it made us feel a lot better about it. Still a little nervous when we launched, but over time we were able to actually hone those guardrails and find alternative ways to address some of them, or just make them more nuanced, so that we could improve the technology over time without taking on any of the risks.

 

 

Ojas Rege:

What did this exercise teach you about how to manage those risks?

 

 

Pam Snively:

The purple teaming was like a little microcosm of what we were experiencing, which was we had to move faster than we have in the past because there is so much happening with AI, and I think there's a strong desire on the part of TELUS to be leaders in AI, but also leaders in responsible AI. I think what it did is it allowed us to really see that we can approach risk differently. So we started talking about risks to individuals, risks to society, and risks to TELUS when we do risk assessment methodologies, and that also gets balanced against benefits, and we hadn't really done that before.

 

I mean we've always talked about proportional risk, but we're really doing it consciously around the benefits. And on the one hand, as I say, we want to be rigorous, but we also don't want to strive for perfection if the alternative is far from perfect. So are we expecting perfection from AI when the human doing the job might be very fallible as well? We shouldn't be. We should just be looking for improvement and making sure we aren't introducing new or unacceptable risks. So that's required us to think a little differently about risk, and it's a much more nuanced, maybe contextual and complex approach than we've had in the past.

 

 

Ojas Rege:

This point on nuance is really interesting because when risk is so nuanced and complex, how do I know whether I'm doing well or not? What metrics are you and the team using to measure success?

 

 

Pam Snively:

Still working on that. I think that continues to be a challenge for anyone in the risk space. What are your metrics for success? I mean, certainly we're trying to avoid any harm, but we are looking at participation in the solutions. We're looking at the number of risks that we can identify and the number of risks that we can close off. And so in that purple teaming exercise, we identified a large number of risks, far more than we expected, and we closed off every one of them. So that's a great metric. And then we have to look at new things over time. The risks that we identified through that purple teaming are not the only risks. We knew those were the ones we identified at that time and that we would continue to identify more over time.

 

And so monitoring over time to see what else pops up has become a really important aspect of our risk measurement methodology, but we're still working on what the metrics are, because I think it's very contextual. I don't think we can just say, here are our metrics for AI risk and they're going to stay the same. It really depends on where we're launching. Your metrics when you're looking at AI risk in an individual healthcare environment are very different from a telco environment, and we're operating in both of those environments, so we have to really think differently about what's at stake and be very human-focused and contextual.

 

 

Ojas Rege:

One of the most gratifying aspects of this story for Pam is how many people participated and continue to participate in purple teaming projects at TELUS.

 

 

Pam Snively:

We've always talked a lot about transparency and trust and their relationship, but I think that inclusivity part of transparency allows people to be part of the solution. So talking about the problem and pulling people in is just so much more impactful than I would've ever imagined.

 

 

Ojas Rege:

It's a lesson Pam saw again and again that people truly cared. That colleagues at all levels thought a lot about AI. She saw this even before TELUS launched Gen AI externally, when they launched a Gen AI bot internally.

 

 

Pam Snively:

We went through a really rigorous process testing that as well and we felt really good about it, and then we launched it. And overnight I was inundated with calls and emails and texts from all corners of the company. People who had never reached out to me before. I've never had that experience in the 10 years I had been at TELUS. People saying, is this okay? Is this safe? Have you looked at this? Are you okay with this? Is this going to hurt our customers? Is this going to hurt our team members? Is there a risk to this? Everyone was really quite alarmed that we were launching something pretty early on that was Gen AI driven and for good reason. The media was rightly full of stories of these things going wrong, and so their fears were legitimate. But what I realized was, no, we did have it. We had it. We were on top of it. We had absolutely assessed it, but we forgot to tell them that and they didn't know.

 

And so what we learned from that was we cannot just lead with the technology. We have to lead with the information that we have done this responsibly. And so every single thing that we have put out about AI within TELUS since then, we start off with talking about how we made sure it's safe, how we made sure it's responsible, and that has made all the difference.

 

 

Ojas Rege:

I want to stress Pam's point here. In the end, it's not about the shiny tech. It's about proactively showing your customers and your employees that the technology's safe and you are using it responsibly.

 

 

Pam Snively:

I never want to get all those emails and notes and messages again. But again, it was really gratifying that we did because for me it meant our team members are actually vigilant to these risks and they care about the brand and they care about our customers and they care about each other enough to reach out and ask me those questions. So that told me that we had been successful with the kind of culture transition that we were seeking, which is this is powerful technology and we have to use it well and responsibly.

 

 

Ojas Rege:

Well, one of the challenges of managing this particular technology is its ubiquity. Everyone ends up using it. How did you think about that when you're designing the program?

 

 

Pam Snively:

What it meant was that we had to really design a program that comes from the ground up and is really a data and AI culture. We can't have a privacy program in an ivory tower, where the privacy office or the security office or the developers are the only people who have knowledge about this and can control it and set the policies, and we're done, because all of a sudden, as you say, the most powerful technology we have ever seen was in the hands of every single one of our employees overnight. We had no preparation for this, just like every other organization had no preparation for this. And so we had to turn to really a culture shift, or leveraging our existing culture. And I've talked about that a couple of times: we have a very strong customer-first culture, and we have very brand-conscious team members.

 

And for that, I'm super grateful because it made our job a lot easier, but it meant that we had to up the ante on our data literacy and move to an AI literacy program and put the responsibility for doing the right thing in the hands of every one of our employees and encourage them to do so because we can't automate our way out of this. We can't mandate our way out of the risks that come with having that type of technology in the hands of everyone. We had to put in place the rules and the guardrails, but then really encourage our team members to do the right thing by showing them what could go wrong and what could go right.

 

 

Ojas Rege:

AI literacy goes hand in hand with collaboration. You can't have one without the other, or at least it's difficult to have one without the other. But so many companies I've talked to struggle with this collaboration because of their silos. How did you handle that?

 

 

Pam Snively:

We've talked for so long about breaking down silos. It really doesn't work. Everyone talks about it and we have for decades in various contexts. It doesn't really work. I think we just need to design something that will work regardless of the silos that pop up. And so it's got to be something horizontal that recognizes there's going to be silos, there will always be silos. There are people who specialize in certain areas, there are business teams. And so we need to design a governance structure that allows for the existence of silos rather than constantly fighting to break them down.

 

 

Ojas Rege:

It's so nice to hear someone at a leadership level talking about the human elements of AI governance because we're all in the trenches trying to figure this out, and it's hard.

 

 

Pam Snively:

It is really hard. No one's got this nailed. And what I've found is, since I started talking a little bit about this process and the purple teaming, and when we went public with our Gen AI support bot, I heard from a lot of people in similar roles at other organizations saying, how did you get comfortable with this? What was the process you went through? Because we can't get there. I also thought that was a really great sign for society, that what was holding people back was concerns about whether it was responsible. So if you can nail that, if you can figure out a process like purple teaming or using other tools, then I think we have the opportunity to really realize the benefits. But it was interesting to hear, and I still hear it all the time, that that's what's holding people back, and no one's got it all figured out. We're learning new stuff every single day.

 

 

Ojas Rege:

I learned so much from my conversation with Pam. I hope you did too. Here's what stuck with me. The TELUS purple teaming exercise shows us that you can break the compromise between fast and safe. You don't have to choose one or the other. Many organizations think they do have to make that decision. That is a dangerous choice. You can do both. You can adopt AI quickly while still ensuring customer safety. Next, one of the most gratifying moments from the TELUS experience is how deeply their employees cared that AI was deployed safely for their customers. Your employee base is one of your most valuable assets for testing new AI tools and acting as a human guardrail, protecting the interests of customers.

 

AI is collaborative by its very nature, but organizations are siloed by their very nature. Every organization must design a governance structure that lets them work and succeed with silos as a practical reality. They're not going to go away overnight, but you've got to deploy AI now. And finally, AI requires a more complex and nuanced approach to risk than we've had in the past. We have to remember that no one has got this nailed. So the goal isn't perfection, it's constant improvement without introducing unacceptable risk.

 

This is Trustonomy and this series is about what AI governance looks like, not just in theory, but in practice. If you've picked up something useful, share the episode and follow the show so you don't miss the next one. We'd love to hear what's working for you, what lessons you've learned, and what you want to hear more about in future episodes. Visit us at onetrust.com/podcast/trustonomy. I'm Ojas Rege, thanks for listening.
