Milin Chhanechhara:
We were heavily bottlenecked. We were spending the majority of our time not just in the interview process, but in what we call reactive oversight. And it wasn't sustainable for us.
Ojas Rege:
Milin Chhanechhara is rare in the AI governance world. He started in data science, then stepped into AI governance right when the industry needed leaders who could do both, bridging the gap between the people building AI and those accountable for its impact.
Milin Chhanechhara:
And I was super frustrated because everyone was coming to me to have their AI project reviewed. And I was really, really stressed about that. We were just a three-member group. Everything was chaotic. We were bombarded with so many different AI projects, all needing review by stakeholders: security, data privacy, the responsible AI group.
Ojas Rege:
Sound familiar? Overwhelmed teams, too many reviews, and no clear way forward. Milin is the lead data scientist and architect of AI governance and responsible AI practices at Lumen Technologies. He works closely with Danielle Maves, VP of AI governance. Lumen is a global digital infrastructure company building the network backbone that makes AI possible. Up until last year, Lumen teams working on internal AI projects had no efficient way to share insights, reuse resources, or build on each other's work. Danielle, Milin, and the AI governance team had to make sure every AI system was safe and compliant.
Milin Chhanechhara:
Data scientists, engineers, solution architects, a lot of these core technical teams may not have a complete understanding of AI regulations, like what exactly they need to comply with. Do we even have a process in place? Do we have a policy? A lot of this comes from the knowledge gap, or from teams working in silos when building their AI products. So we needed something to break that silo, something that allows people to share, exchange ideas, and utilize each other's resources.
Ojas Rege:
One of the biggest challenges in AI-ready governance is overcoming the inefficiencies caused by silos. I hear about it all the time. In today's episode, you'll discover how Lumen Technologies designed a solution that improves collaboration across teams without reinventing the wheel. We'll cover how it's already helping them scale AI deployments and the strategies they're using to manage AI-ready data.
I'm Ojas Rege, GM of Privacy and Data Governance, and this is Trustonomy, an original podcast from OneTrust. This season, we're exploring AI-ready governance, how companies are modernizing programs to move at AI speed and turn oversight into advantage. We go behind the scenes to see AI governance in action. Follow and share the show. Let's take this journey together.
I mentioned that Lumen plays a significant role in the infrastructure behind today's AI systems. Their network reaches over 70% of global IP addresses, a footprint that puts them at the center of modern connectivity. But when they set out to build their internal AI governance program in late 2024, they faced a familiar problem: how to scale governance to keep up with the speed of AI innovation.
Milin Chhanechhara:
Lumen is a very large, complex organization, and every single AI-related project can involve a lot of different stakeholders, like solution architects, network engineers, data scientists, data engineers, procurement leads, and product managers, who often spread across multiple organizations and business units within Lumen. So we quickly realized that asking every single luminary to focus on what exactly AI governance does, and how they can navigate the complexity of almost 47 different AI regulations now, creates enormous overhead and frustration.
Before, our AI governance team was increasingly focused on playing, I would say, whack-a-mole. Projects were stalling, and a lot of these ideas were dying a death by process. So our leadership asked a very genuine question: hey, how do we give every team member immediate access to trusted, knowledgeable resources without forcing them to navigate the complexity of AI governance?
Ojas Rege:
It was clear something had to change. The existing approach just wasn't scaling. Without a better way to support teams, innovation was slowing and AI risk was rising, but the AI governance team had an idea.
Milin Chhanechhara:
The ambassadorship program was created explicitly to prevent governance from becoming, I would say, an ivory tower exercise, policies written at headquarters. By placing trained, respected AI ambassadors inside every business unit and geography, the program embeds the expertise exactly where the decisions are actually made. So right now, for example, let's say a project owner or business group wants to build an AI project. They will reach out to ambassadors first. When a team wants to pilot a new AI capability, their ambassador will be in the room from day one.
Ojas Rege:
The ambassadors are the first point of guidance for a project. They know the questions that need to be asked like, what are the goals of the project? What are the components? What are the resources needed and who has them? And before signing on with a third party vendor, could an existing internal tool be leveraged instead?
Milin Chhanechhara:
In short, the AI ambassadors started pulling things together, drafting a plan that builds on what's already out there and complies with AI governance policies, to make sure we have the right assessment. Obviously, every project still goes through our AI governance intake and review process. But a lot of those gaps started thinning out thanks to the ambassadors.
Ojas Rege:
It sounds great. I think I want to be an ambassador if you'll let me.
Milin Chhanechhara:
Yeah, absolutely.
Ojas Rege:
Were there early challenges to getting the ambassador program off the ground, both inside the governance team and across the business?
Milin Chhanechhara:
Yeah. Right now, AI is very common, so people would like to be a part of it, but getting this idea started was one of the biggest challenges for us, selling it to different stakeholders. I think what actually drew them to us was our understanding of what AI ambassadors represent, and the investment we are making in the ambassadors by training them on different responsible AI practices and AI regulations. People are starting to realize it's proactive, rather than reactive when something goes wrong.
So our strategy was always to be collaborative, not a bottleneck. We didn't want to stop people from approaching AI. We actually wanted to partner with them. In a lot of cases, we were really working with them to help them understand there is a need for a certain role that helps us break the silo and increase efficiency. Ultimately, once it became clear that, okay, this is the right thing to do, people were really drawn to us.
Ojas Rege:
Who are the people who make up this ambassador group?
Milin Chhanechhara:
It could be anybody interested in AI governance and driving the change that we need at Lumen. We have everyone from vice presidents of the company to data scientists who are very curious about AI regulations and the different pieces of legislation impacting AI technologies as a whole. We have solution architects and HR representatives. So people from all across the organization make up this ambassador group.
Andrew Gaskins:
My name is Andrew Gaskins and I am a principal solution architect here at Lumen Technologies.
Ojas Rege:
Andrew was one of Lumen's first ambassadors.
Andrew Gaskins:
One of the main reasons I started getting involved with AI, as we started to move towards it, was personal: as a parent, thinking about what the future of AI means and how it's going to shape the future for our kids. So thinking about that and being part of the voice internally. And then as a technologist, I wanted a voice in responsible AI governance and what we were doing. Understanding how the ambassador program was bringing together people from all areas of the company, different perspectives, different views, legal, ethics, security, all of that made a lot of sense to me, and it also made it a lot easier to do my job.
Ojas Rege:
Andrew has multiple responsibilities. One of those is low-code tools for building apps, automating workflows, and analyzing data, and specifically bringing AI-powered assistance to those tools. Before the ambassador program, he describes AI's role in his job as ...
Andrew Gaskins:
Fragmented or disjointed is probably the best way to describe it. So everybody's hearing about AI. Everybody has this kind of edict like, let's move forward, and you have to get AI into what you're doing, but without a real understanding of, well, what does that actually mean? So you kind of have all of these people coming to you saying, "Hey, I want to do this." But without that kind of funnel for us to talk to each other and say, "How do we do this? How do we do this responsibly? How do we do this ethically, safe, secure?"
Ojas Rege:
Lumen's ambassador program distributes governance, creating a way to scale trust across the organization. It gives people like Andrew a forum to raise ideas and move forward safely and responsibly. The group, currently 40 ambassadors and growing, meets weekly to share insights and learn from each other. It's a culture where all perspectives are valued even when opinions differ, like recently during a discussion about an AI assistant.
Andrew Gaskins:
There were a couple of people in there who had very strong feelings about it and about its capabilities. And it was an opportunity for us to kind of delve deeper. And it really was not well received at the beginning of the conversation. There was just this idea of, no, it's not what we expect it to be. So kind of stepping back and listening to that and saying, "Okay, well, what are your concerns? Where are your concerns around this?" So those were then ongoing conversations which are now allowing us to help them with our platform so they can do work better and faster and understand what those boundaries are. That's the standout thing is this communal knowledge that we're building there.
Ojas Rege:
Those conversations with the ambassadors, all of whom are subject matter experts, are the heart of the program. Being in the room and part of the conversation gives Andrew a clearer, more complete view of the technology and how it's evolving, and he's not alone.
Andrew Gaskins:
Our VP, Danielle Maves, who's in there most often, a lot of times it's sitting back and listening. You can tell she's taking it all in, she's thinking about it, and I know that that shapes the conversations that they have when they go back to AI.Gov.
Ojas Rege:
In just over a year, Lumen's ambassador program has enabled faster and safer AI adoption. Teams have someone to turn to with questions or challenges. An ambassador who in turn can lean on a network of fellow ambassadors. It makes governance a partner in progress rather than an obstacle and keeps people from either overwhelming the AI governance team or sidestepping rules entirely. Milin, as lead architect on AI governance at Lumen, tells a story about how the ambassador program was pivotal in shaping the company's strategy for AI-ready data. In the past, you've talked about how the ambassador program and other elements of your AI governance program helped you get your AI-ready data strategy in place. Can you first of all define what you mean by AI-ready data?
Milin Chhanechhara:
Yeah, absolutely. I would say data is the raw material for AI. And at Lumen, we treat it with the same level of governance rigor as we do the algorithms themselves. So we define AI-ready data as data that is high quality, fully traceable, appropriately protected, and deliberately prepared for the responsible use of AI.
Ojas Rege:
I'm just going to repeat that, because it can be tricky to define AI-ready data, but so important to get it right. Data that is accurate, traceable, secure, and deliberately prepared for responsible use in AI systems.
Milin Chhanechhara:
Before any dataset can be used for training or inference, it must satisfy strict guidelines enforced jointly by AI governance and the data privacy team. For example, if you use personal or sensitive data, the privacy team will conduct a formal privacy impact assessment. They'll make sure that we minimize collection and mandate controls through encryption and, in a lot of cases, localization and differential privacy. That's one of the reasons data privacy and legal are crucial pillars of the AI governance program. We align with them and partner with them on a day-to-day basis to make sure that every project is in alignment with the privacy standards we have established across Lumen.
And AI governance also helps them through a responsible AI assessment. For example, if you are using sensitive information like race, gender, ethnicity, or nationality, we partner with the project team to conduct a fairness assessment, to eliminate any biases that may be present, advertently or inadvertently, in our dataset, because ultimately what feeds into AI stays in AI.
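For readers unfamiliar with one of the controls Milin mentions: differential privacy protects individuals by adding calibrated random noise to aggregate query results. This is a minimal sketch of the standard Laplace mechanism for a count query, not Lumen's actual implementation; the function names and epsilon value are illustrative.

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sample from a Laplace(0, scale) distribution.
    # random.random() is in [0, 1), so u is in [-0.5, 0.5);
    # production code would guard the u == -0.5 edge case.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def dp_count(records, predicate, epsilon):
    """Differentially private count: a counting query has sensitivity 1,
    so Laplace noise with scale 1/epsilon yields epsilon-DP."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: noisy count of even-valued records out of 100.
random.seed(7)  # seeded only to make the sketch reproducible
noisy = dp_count(list(range(100)), lambda r: r % 2 == 0, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; the trade-off between accuracy and protection is exactly what a privacy impact assessment weighs.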
Ojas Rege:
What's a specific example where the ambassador program helped you achieve that?
Milin Chhanechhara:
Ambassadors are truly at the forefront of every AI project. For example, when a project involves sensitive or private information about an employee or a customer, that core assessment work now sits within the business group, where they partner with their own ambassadors to conduct it. It doesn't come to us any longer, because they are now proactively making the effort to ensure the AI is trained on clean, balanced, well-governed data that is accurate, robust against drift, and far less likely to face early decommissioning.
Ojas Rege:
You mentioned fairness assessments, anonymizing sensitive data, differential privacy. Can you walk us through a couple of examples of what those look like in practice?
Milin Chhanechhara:
We have a lot of projects that are specific to human resources. For example, if you are using sensitive information, we've built an internal tool called Lumen AI Ethics, which mandates the fairness assessment. Based on the data you feed it, that tool automatically generates a report that shows us the disparity metrics and the disparity in allocation, like, "Hey, you are training a model, but your model contains disparity between male and female." That level of disparity is identified through this assessment tool. We also have a privacy assessment from the responsible AI standpoint. For example, we identify how many data points it would take to re-identify an individual within a group of data points.
And based on that, if we find a disparity, or if we find issues in the k-anonymity metrics specifically designed for the privacy assessment, we collaborate with the project owner's team. In this case, whether it's HR or any other group, we collaborate with them to let them know, "Hey, there is a disparity in your dataset. You might want to consider a dataset that is more balanced," so the AI can be trained to make the right decisions. That level of partnership we have enabled between AI governance, the AI ambassadors, and the project owner's team is what really drives AI-ready data for our AI governance.
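The disparity report Milin describes maps onto standard fairness metrics. As a sketch of the general idea (not the internal Lumen AI Ethics tool; the function names and sample data are illustrative), a demographic-parity gap compares positive-outcome rates across groups:

```python
def selection_rates(outcomes, groups):
    """Positive-outcome rate per group, e.g. the share of each
    group that a model marks as 'hire' (1) vs 'reject' (0)."""
    totals, positives = {}, {}
    for y, g in zip(outcomes, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + y
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(outcomes, groups):
    # Gap between the most- and least-favored groups; 0.0 means parity.
    rates = selection_rates(outcomes, groups)
    return max(rates.values()) - min(rates.values())

# Illustrative data: the model favors group "m" (2/3) over group "f" (1/3).
gap = demographic_parity_gap([1, 1, 0, 1, 0, 0],
                             ["m", "m", "m", "f", "f", "f"])
```

A tool like the one described would flag a gap above some policy threshold and prompt the project team to rebalance or reweight the training data.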
Ojas Rege:
And in reflection, as you look back over all those months, what has been the biggest impact of the program?
Milin Chhanechhara:
Oh, one of my favorite parts of the AI ambassadorship program was that we were able to break the silos. People were genuinely interested in sharing ideas and sharing resources. I really loved seeing that.
Ojas Rege:
I'm going to make this a little personal now, going from the broader programs to Milin as a data scientist. So for you, when you look at others in the industry, your peers, not just in Lumen, what is their perspective on AI governance? How does a data scientist think of AI governance?
Milin Chhanechhara:
Yeah, I've had so many similar conversations with my peers across the industry, and it's really challenging that right now they consider AI governance a roadblock, a barrier. And that really breaks my heart, because the way we have designed AI governance is to be truly a partnership with every single business group in our organization, being a partner, making sure that everyone's voice is heard. Instead of treating it as a mere compliance checklist, I think they should start looking at it more as a partnership with their organization.
Ojas Rege:
So if you were giving a TED Talk, let's say, to a room of data scientists on this topic, what lessons from your program would you communicate to them? What lessons can be applied to data scientists more broadly?
Milin Chhanechhara:
One thing I would truly advise them would be to embrace responsible AI practices, because they're truly the core and foundation of not just the AI governance program, but of everything we represent as humans. It's not just about compliance checks or requirements from these regulations, data regulations. I truly consider responsible AI practices the core and foundation of what humanity represents. If you embrace that, every single thing will come naturally to you.
Ojas Rege:
What really stood out to me in this conversation is that Lumen didn't try to centralize their way out of AI complexity. They distributed trust instead. When governance becomes reactive, it doesn't just slow down teams, it burns people out and good ideas die in the queue. Lumen recognized that scaling AI safely wasn't about adding more reviews, but about embedding governance where decisions actually happen. Here are a few more of my takeaways. Governance doesn't scale from the center. It scales through people.
By training AI ambassadors across roles, geographies, and business units, Lumen turned governance from a bottleneck into a multiplier. Expertise moved closer to the work and oversight became proactive instead of reactive. Sharing knowledge is essential for effective AI-ready governance programs. The ambassador program didn't just improve compliance. It helped make teams smarter. It allowed them to make faster decisions by sharing practical, applied knowledge that those teams could use right away. As a result, governance became an enabler of efficiency and effectiveness, not a constraint.
Every customer needs a thoughtful strategy around AI-ready data. Lumen treats data as a first-class governed infrastructure component, accurate, traceable, protected, and deliberately prepared for AI use. Embedding privacy, fairness, and quality checks early ensures the model doesn't quietly introduce dangerous risk later.
This is Trustonomy. And in this series, we're exploring what AI governance looks like, not just in theory, but in practice. If you've picked up something useful, share the episode and follow the show so you don't miss the next one. We'd love to hear what's working for you, what lessons you've learned, and what you want to hear about more in future episodes. Visit us at onetrust.com/podcasts/trustonomy. I'm Ojas Rege. Thanks for listening.