Ojas Rege:
AI moves fast, arguably faster than any technology we've ever seen. But as they say, you ain't seen nothing yet. Can we predict the future of AI? Well, we can try. But the more actionable approach is to build systems for governance that can adapt to whatever comes next.
In this final episode of the season, I'm joined by Nick McQuire, who advises C-level leaders on emerging technology. He shares how to future-proof AI governance by focusing less on being ahead of the curve and more on staying agile.
Nick McQuire:
I remember it was about a year ago. It was in the States. I was meeting with customers and I turned on CNBC.
Speaker 1 – anchor voice:
One thing is clear, the impact will…
Ojas Rege:
Nick McQuire is one of the people I turn to when I want to get a sense of what's on the technology horizon. He's advised C-level executives in emerging technology and enterprise innovation for over 20 years. Nick's based out of London, England, but on this visit to the US, he noticed how the mainstream media coverage around AI was shifting.
Nick McQuire:
On CNBC a number of years ago, you wouldn't really get loads of tech, but tech, and particularly AI, is now at the front line of this. They had a researcher on and the headline was, "There's a 50/50 chance that AI is going to end all of humanity." And that was the topic of conversation on Power Lunch on CNBC, which I thought was peak fear.
Ojas Rege:
So far, humanity appears to be hanging in, but how do we operate in that fearful environment and how do we govern AI? If peak fear was where we were a year ago, where are we now, and where are we going to be in two years, five years, or even 20 years? One of my favorite things about Nick, beyond his deep knowledge, is that he's got a real sense of empathy for how transformative technology impacts organizations and people.
Nick McQuire:
You've got new categories now emerging, like small language models, domain-specific language models, of course this convergence happening with agentic AI and the impact that's going to have on software as well. And then you've got the real impact that quantum technologies, quantum computing is going to have on AI in the future. So all these are really, really positive opportunities, but you also have these challenges, like the hype and the bubble that exists. And I would say predominantly that leaders are facing this fear. They're jumping in, but there's a lot of contradictions, and I think it's difficult to navigate that.
Ojas Rege:
It is difficult to navigate. So today, Nick and I are going to look ahead to where AI could be going and discuss, yes, the fear, the risks, and the opportunities, but also give you tips on how to build your governance framework to prepare for that future now. I'm Ojas Rege, General Manager, Privacy and Data Governance, and this is Trustonomy, an original podcast from OneTrust. This season we're covering AI-Ready Governance, the mindsets, frameworks, and strategies business leaders are using to set themselves up for responsible innovation and operational excellence. Nick, how would you describe your field of work?
Nick McQuire:
I have spent well over 20 years in the technology industry, 10 of which have been formally as a technology analyst. I'm going back into that industry after a number of years working for Microsoft and a few other technology companies. The pattern of the work that I've been interested in throughout that life cycle is in the field of emerging technologies. How do you drive rigor? How do you guide organizations around what we call horizon 2 technologies that can impact current business models, as well as take organizations into new fields of discovery and new economic and business approaches? These are technologies that are typically under the radar for organizations, so it's: how do you help organizations adopt these technologies in a safe, secure, and measured way?
Ojas Rege:
So you're talking to a lot of business leaders, a lot of AI governance leaders. Where are they at now?
Nick McQuire:
Yeah, I think you've got a real mix, and in some respects it comes down to the industry they're focusing on, how knowledgeable they are in AI, and how they see the impact it's going to have on their core business, whatever that core business is. And it's really three phases, and it's scenario-based. You have productivity scenarios, which is really the current phase the industry and customers are in right now. How do I reduce friction? How do I reduce these tedious tasks using generative AI? And what we see is that there's now this move to the next phase, which we call augmented cognition, which is very much about the way you use generative AI to co-reason. We use this term co-reasoning. How do you interact with AI? How do you guide AI strategically, in a reasoning sense, to help solve domain problems?
And then very quickly, you move into what we see as this third phase, and it's not necessarily linear, but it's almost value-driven. This third phase is all about accelerating discovery: how do you advance scientific R&D in particular? And I think most customers I've interacted with over the last 12 to 18 months would probably say to us, hand on heart, that they're not that mature with the technology. They're still in that experimentation phase, which is why, in many respects, many aren't succeeding, in the sense of getting those returns or measuring the value.
Ojas Rege:
Let's pause here for a moment and really take in what Nick's saying. Many of the most advanced AI organizations, the ones who can see the promise in AI and quantum computing to drive their research and development, even they don't feel ready or like they've got this under control. So if AI governance and measurement feels messy to you, you're in good company. Nick explains why.
Nick McQuire:
I think generative AI took a lot of organizations by surprise, including Microsoft. There's a lot, as we know, to deal with with AI at this current moment in time, which for some organizations keeps them from looking around the corner, looking beyond. And that is something that I think some organizations really learned with the rise of generative AI and how quickly that arrived.
Ojas Rege:
And it did come fast. Certainly changed people's perspective. You mentioned looking around the corner. When you think about AI a year or two years from now, three years, what will have changed?
Nick McQuire:
I think we'll start to see more scenarios where we're seeing meaningful value and meaningful transformation, not just a simple use case, but a broader organizational shift. I think we'll hopefully start to see more breakthroughs in the fields of science, which will show the world the value that AI can have. You're starting to see this discipline of quantum in AI arrive now, which I think is really fascinating. This is more of a quantum computing discussion, but it primarily manifests itself in the field of scientific AI models, because that's where quantum computing is really going to have the most significant impact. But the discussion isn't just about using quantum as a compute resource to power AI. It's much more about how you use quantum simulations to enhance existing classical AI models. So quantum computing can be a very valuable source of synthetic data as well. The other piece to this opportunity is around how you leverage synthetic data, and quantum will be one aspect of that as well.
Ojas Rege:
Nick, from your experience, the companies that are dipping their toes into these opportunities, how are they approaching it?
Nick McQuire:
There's a business approach and then there's a technology approach. The business approach is, "Okay, I see the potential of this early-stage AI capability. I understand where it's made a pretty important impact with other organizations, perhaps in adjacent fields. I'm not yet sure where this is going to take me, but we're believers in it. So we want to get going early. We want to get our hands dirty, and we want to start, kind of warts and all." And that is the world of private preview. That is the world of products that aren't generally available, that are being built a little bit on the fly, and the customers are getting their hands dirty early on to help shape them, but also to get realized value. So I think that's the business approach. It's to say, "Okay, we're going to make the investment, but we need to do it to learn and to get our hands dirty, even though the outcome may be uncertain."
Ojas Rege:
How about the technology approach you mentioned?
Nick McQuire:
The technology approach, I think, belongs to those that are not doing this in silos. They, to an extent, see the importance of having almost a platform approach to future innovation. And by a platform, I mean it's extensible. So if I want to start with HPC and run AI models in an HPC-based environment today, I can do that. But if I want to run some quantum experiments alongside those workflows, or as part of those workflows, I can use that platform capability to experiment with a new modality. That extensible platform approach to future technologies is really important, so they're not building these things in silos.
That, for me, is really where I think the ones that get it are setting themselves up for success in the future, because they're starting early, they're getting their hands dirty, but they have the right technology approach to be able to embrace things as things change. And that really does require a little bit of risk, because you're leaning on your technology provider, you're leaning on your cloud provider in some respects for this. But the flip side is you have the agility, and you're not hamstrung by an approach where you have to sweat the assets you've currently invested in.
Ojas Rege:
That idea of platform extensibility is such an important one, especially in a landscape where the underlying AI technologies are evolving almost daily. Having that flexibility keeps you from getting locked in, but of course, the platform is only as powerful as the data that fuels it.
Nick McQuire:
There's quite a lot of work that customers need to do to get their data ready. The plumbing of their experiment data, sitting in different areas: some of it in unstructured data environments, some of it in electronic lab notebooks, others in other formats. So that plumbing, to get the data into a place where you can apply an LLM capability over the top of it, is quite a significant task. That's where customers underestimate things a little bit, and where they come unstuck, because they see the ability for this stuff to really add value from a discovery perspective. But the lack of knowledge about what it will take to get that data ready to achieve that value, and the timescales associated with it, can throw customers off.
Ojas Rege:
Let's move this out of the theoretical. Take us inside a horizon 2 project. Could you give me a real-world example?
Nick McQuire:
Yeah, sure. I had a front-row seat to some of the work that Microsoft did with the U.S. Department of Energy. They've been working with a branch called the Pacific Northwest National Laboratory. And this is all about, "Okay, how can we understand the impact that AI is going to have on the discovery of new materials that could replace lithium in batteries?"
Ojas Rege:
There are a lot of reasons to look for ways to make better batteries. Lithium is already a relatively rare material, so the regions that can control and process it have an advantage, but mining lithium is water and energy-intensive, and traditional lithium-ion batteries can pose safety issues. Still, the U.S. Department of Energy expects the demand for lithium to rise five to 10 times by 2030. To find a replacement, research teams had been working through the catalog of roughly 250,000 known materials. The challenge was that within that relatively small universe of materials, none were a perfect fit to replace lithium in batteries, but AI changed that scale entirely.
Nick McQuire:
There are a couple of AI models produced by Microsoft Research that actually generate new molecules. You specify the properties you want, and it will come up with a huge list. Immediately out of the gates, there were 32.8 million candidates to start. That was the surface area of the discovery funnel. Now, obviously they're not all going to be relevant, so there's a process of whittling that down, and you can use AI to speed up that process. Pacific Northwest National Laboratory, using AI, essentially went from 32.8 million candidates of discovery, multiples of what was known at that particular time in terms of human knowledge, down to 18 candidates that could be synthesized in a lab, in the space of a week.
Ojas Rege:
One week to go from 32.8 million to 18. Let that sink in.
Nick McQuire:
Now, the key was, in the laboratory work, one successful candidate came out of that batch of 18. What ended up happening is that they produced a working prototype, and Microsoft has published images of this working prototype. It powers a digital alarm clock. So they went from 32.8 million candidates of discovery down to a working prototype, and that took them nine months. That replaces five-plus years of R&D time in terms of their traditional cycles.
Ojas Rege:
There's still a lot of work to be done in testing and productizing, but the possibilities are astounding.
Nick McQuire:
And that will hopefully change the narrative away from this kind of fear-driven approach, towards actually, "Look what this can do" from a discovery perspective, and move it to a much more of a value-driven approach and awareness of where that transformation exists. I think we don't have enough conversations about where the real transformation is happening. It's not a question of where it will happen. It's like, where has it happened so far? Give us some evidence, case-based evidence, and I think this is a really good one in that sense.
Ojas Rege:
You think to yourself, how will the world change? It's like a feast of imagination, potentially. But what are the implications of all of this for privacy, for trust, consumer trust, and risk?
Nick McQuire:
Yeah, really important points, because the more you put AI at the heart of your business, at the heart of your IP ... R&D-based AI, which is what I was highlighting earlier, is really the core intellectual property driver of many organizations. You really open up those risks. So I would say this is a massive topic of discussion, because generative AI is there, and we've seen the impact that it can have on manufacturing industries. 95% of all products manufactured have roots in these fields of chemistry and materials. But the big, big thing that customers are struggling with is intellectual property and how to protect it. There's the risk of data loss: if you're bringing your intellectual property into your prompts, where is that intellectual property going? Is it training other models? Is it protected, in the sense that I can control the data used to fine-tune these AI models? Or am I exposing my data, which is based on years of experiments, right?
Ojas Rege:
That's really, really valuable data. How are leaders approaching that risk?
Nick McQuire:
I think the conversations have really shifted: okay, yes, I want protection of my inputs, I want much more clarity around the explainability of how these models are trained and the data sets used to train them, and I want better transparency into the decision-making of these high-reasoning systems as well. That is something the industry has got to work on, and I think it ultimately comes back to some of the core tenets of having proper AI governance. It's just more important at this stage of the game for leaders if they really want to go in on this. And then when you consider that you've got other departments in the organization pursuing approaches to generative AI that are disparate from what's happening in R&D, I think the smart organizations now are saying, "Okay, we need a coordinated approach across the board."
Ojas Rege:
Well, Nick, given that every company is coming up this maturity curve and everyone has a disparate approach: if you are a governance leader, how do you prepare for the future? How do you make your approach AI-ready when the speed of things coming at you is pretty fast?
Nick McQuire:
Yeah, I think it's important to be on top of what's around the corner. So it's about governance not just being reactive to things today, but being curious and proactive about things that are coming tomorrow as well. And there's lots going on, everything that's directly related, whether it's disinformation security. You look at security: post-quantum cryptography is a real concern for organizations in highly regulated industries. That emerging field isn't just an outlier space now. So the idea is to start there, as part of the process for introducing new capabilities, as opposed to being reactive.
And often, what I've seen as well is customers thinking just in the context of compliance, that kind of box-ticking approach, and it's confined to AI, as opposed to, "Okay, how do we think about this all up?" So, thinking in a way that is much more about responsible innovation, particularly when we consider how quickly AI, and generative AI in particular, landed, and being aware that there'll be more coming down the line. So I think it will change, but I do feel that it's the minority of organizations at this current moment in time, through no fault of the organizations, because there is so much keeping them awake at night when it comes to AI right now.
Ojas Rege:
It's not about governance as compliance, it's about governance as foresight. It has to be proactive, adaptable, and designed to evolve with the technology so we can keep pushing forward responsibly. Most organizations are still in the early, compliance-focused stage of AI governance: patching holes, keeping up with regulations. But as they mature, governance can move beyond checklists toward scalable functions and, ultimately, using AI strategically to enable the business. Let's end on this: if you're talking to a CEO or director or a board, not necessarily the AI governance leader, what's your advice to them for preparing for the future?
Nick McQuire:
Use it. Use it. Use it as often as you can. Experiment, as in, individually experiment with it, and don't just use one tool. Use as many as you can. Depending on your level of maturity, I would start with seven or eight of the big LLMs that are out there. Try to use them as much and as often as you can, obviously within the guidelines and the governance of your organization. Familiarity will breed the insight and the knowledge to understand how to apply it. Don't just rely on guidance and advisors and the core team. Get your hands dirty.
Ojas Rege:
If you're feeling a mix of excitement, fatigue, and fear after that conversation, you're in the right place. We started talking about fear, but I hope that hearing from Nick today helps you tune in to what's possible if you move in the right AI governance direction early. Specifically, here's what I hope sticks with you. Future-proofing starts now, and it's rooted in good governance. That can feel overwhelming because things are moving fast, and tomorrow they'll move even faster. But the organizations that get ROI from AI are already looking at that horizon. Next, good governance is proactive, not reactive. To Nick's point, it's more than rules or checklists; it's the foundation for whatever comes next. It's about giving organizations the ability to adopt new technologies in a risk-informed way without locking them into one path.
And you have to build agility into the model. I was recently talking to a customer who was worried about lock-in. They said, rightfully, that with AI evolving so rapidly, any technology investment they make today is going to be outdated in just a few months. I'm hearing that a lot, and it leaves businesses in a difficult spot. In a world where innovation accelerates constantly, governance must float on top of technology decisions. You can't bolt on a framework after the fact, and it can't be static. It has to be agile, just like the systems it's guiding. Lastly, if you want to figure out how to do all of this, get your hands dirty. If you're in a leadership role, this is the moment to roll up your sleeves. Don't just oversee AI, use it.
This is Trustonomy, and this series is about what AI-Ready Governance looks like, not just in theory but in practice. If you've picked up something useful, share the episode and follow the show. We'd love to hear what's working for you, what lessons you've learned, and what you want to hear more about. Visit us at onetrust.com/podcasts/trustonomy. I'm Ojas Rege. Thanks for listening.