As businesses race to embed artificial intelligence (AI) across operations, a crucial factor often gets overlooked: governance. In a recent episode of TFiR’s An Eye on AI, Jesse McCrosky, Principal Architect – GenAI at Egen, sat down with Swapnil Bhartiya to explain why AI governance is not just about regulatory compliance—it’s a foundational pillar for business success.
McCrosky argues that organizations serious about scaling AI need to “get their own houses in order” through robust governance processes.
The High Cost of Governance Gaps
The stakes couldn’t be higher. McCrosky points to research showing that a large proportion of AI initiatives fail to deliver business value, often due to inadequate governance. The consequences extend far beyond missed opportunities.
Consider Zillow’s cautionary tale. The real estate giant’s AI-powered property-flipping venture, which used machine learning to estimate home values and flipping potential, collapsed spectacularly when the model failed to adapt to changing market conditions. The result? A $500 million loss in a single year, the shutdown of an entire division, and layoffs affecting 25% of staff.
“This is not about breaking the law,” McCrosky explains. “This is just an AI system that’s too risky, not being adequately managed and mitigated and ending up with very bad outcomes for the business.”
The Five Pillars of AI Risk
McCrosky’s governance framework centers on a comprehensive view of risk that goes beyond traditional compliance concerns. He identifies five critical risk categories (a minimal code sketch of how a team might track them follows the list):
Regulatory Risk: With AI regulation rapidly evolving globally, staying compliant is increasingly complex. Legal teams struggle to keep pace with new requirements across jurisdictions.
Operational Risk: AI systems that aren’t aligned with business objectives can actively harm operations, as the Zillow example demonstrates.
Competitive Risk: Organizations that lag in AI adoption face the risk of being outmaneuvered by more agile competitors achieving efficiency gains.
Reputational Risk: When AI deployments conflict with stated company values or public commitments, brand damage can be severe and lasting.
Societal and Environmental Risk: AI systems that negatively impact communities or the environment create long-term risks for sustainable business operations.
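To make the framework concrete, here is a minimal sketch, in Python, of how a team might record risks against these five categories. The entry, scores, and the likelihood-times-impact scoring rule are illustrative assumptions, not part of McCrosky’s framework.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    REGULATORY = "regulatory"
    OPERATIONAL = "operational"
    COMPETITIVE = "competitive"
    REPUTATIONAL = "reputational"
    SOCIETAL_ENVIRONMENTAL = "societal/environmental"

@dataclass
class RiskEntry:
    """One identified risk for an AI initiative."""
    category: RiskCategory
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (minor) to 5 (severe)
    mitigation: str = "unassigned"

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact scoring; real programs use
        # richer rubrics, but the register structure is the point.
        return self.likelihood * self.impact

# Example entry: a Zillow-style failure mode is an operational risk.
register = [
    RiskEntry(
        RiskCategory.OPERATIONAL,
        "Valuation model may not adapt to shifting market conditions",
        likelihood=3, impact=5,
        mitigation="Continuous backtesting and drift monitoring",
    ),
]
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"[{entry.category.value}] score={entry.score}: {entry.description}")
```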
The Alignment Challenge
At the heart of many AI failures lies what McCrosky calls the “alignment problem”—the tendency for AI systems to optimize for metrics rather than intended outcomes. He illustrates this with a classic example from colonial India, where a government bounty on dead cobras led to cobra breeding farms rather than pest reduction.
“AI becomes very good at doing what you tell it to do, not necessarily what you actually want it to do,” McCrosky warns. This principle, formalized as Goodhart’s Law, manifests everywhere from content recommendation algorithms that prioritize engagement over user satisfaction to hiring tools that perpetuate historical biases.
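For readers who want to see the mechanism, here is a small self-contained simulation of Goodhart’s Law in the recommender setting. All numbers are invented for illustration; the point is only that applying optimization pressure to a proxy metric (clicks) selects exactly the items where the proxy and the true goal (satisfaction) diverge.

```python
import random

# Toy sketch of Goodhart's Law with invented numbers: a recommender
# ranks items by a proxy metric (predicted clicks) while the value we
# actually care about is user satisfaction.
random.seed(0)

items = [{"clicks": random.random(), "satisfaction": random.random()}
         for _ in range(1000)]
# Make the proxy diverge from the goal for the extreme items:
for it in items:
    if it["clicks"] > 0.9:          # the "clickbait" region
        it["satisfaction"] *= 0.2   # tempting to click, unsatisfying to watch

def top_k(items, key, k=10):
    return sorted(items, key=lambda it: it[key], reverse=True)[:k]

def mean_satisfaction(selection):
    return sum(it["satisfaction"] for it in selection) / len(selection)

print("optimize proxy (clicks):    ",
      round(mean_satisfaction(top_k(items, "clicks")), 3))
print("optimize true goal directly:",
      round(mean_satisfaction(top_k(items, "satisfaction")), 3))
# Optimizing the proxy surfaces precisely the items where clicks and
# satisfaction disagree, so the true goal suffers.
```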
Beyond the Bias Myth
One of McCrosky’s most provocative insights challenges the AI industry’s pursuit of “unbiased” systems. He argues that truly unbiased AI is impossible because it relies on real-world data generated by an inherently biased society.
“Nothing is unbiased and nothing is fair when it comes to data that comes from the real world,” he states. “We do not live in a just society. There’s unfairness everywhere, and so the data that’s generated by that process is also going to be unfair.”
Instead of chasing the impossible goal of eliminating bias, McCrosky advocates for systematic risk assessment and mitigation. He points to OpenAI’s early handling of DALL-E’s representational issues as an example of the right approach—identifying potential harms and implementing targeted interventions, even if imperfect.
Governance as Innovation Enabler
Contrary to common perception, McCrosky argues that governance doesn’t stifle innovation—it enables it. The key is embedding governance thinking throughout the AI lifecycle rather than treating it as a final checkpoint.
“AI governance can support innovation,” he explains. “When we want to build a system, we want to think about how we want to design it. We want to understand how we’re going to measure success. We want to envision the world that we want to create.”
This requires creating “sandboxes” where developers can experiment freely in early stages while ensuring governance considerations are built into the foundation of every AI initiative.
The Transparency Imperative
McCrosky emphasizes transparency as a cornerstone of effective AI governance, arguing that public trust requires understanding. “I think that AI does have the potential to change our world and our society and our economy,” he notes. “Everybody has the right to be involved in conversations about what sort of world we’re building.”
This doesn’t mean revealing proprietary algorithms or sensitive data, but rather providing high-level transparency about what systems are being built and why.
Building Governance Culture
Perhaps most importantly, McCrosky stresses that effective governance requires more than processes—it demands culture change. “AI governance very easily becomes a box-ticking exercise,” he warns. “Without a culture of actually caring about getting this right, it’s very difficult to end up with good outcomes.”
This cultural shift requires upskilling teams, clarifying accountability, and fostering genuine commitment to responsible AI development across the organization.
The Path Forward
For organizations embarking on AI governance journeys, McCrosky’s advice is straightforward: “Get a bunch of smart people in the room and have them think about what can go wrong and what to do about it.” While this may sound simple, the key is systematizing these conversations into robust, repeatable processes.
As AI continues to reshape business and society, the organizations that thrive will be those that view governance not as a constraint but as a competitive advantage. In McCrosky’s words, effective governance ensures that AI initiatives realize their potential while avoiding the costly pitfalls that have claimed others.
The choice is clear: embrace governance as a strategic enabler, or risk becoming another cautionary tale in AI’s rapid evolution.
Edited Transcript
Swapnil Bhartiya: Today, we are diving into one of the most critical and often overlooked aspects of AI success: governance. As organizations race to integrate AI into their operations, the need for clear, responsible oversight has never been more urgent. Joining me today is Jesse McCrosky, Principal Architect for GenAI at Egen. Jesse brings deep insights into how effective governance isn’t just about compliance—it’s a strategic lever for innovation, risk management, and long-term value. In this episode, we will explore why governance is so fundamental to AI strategy, how it shapes fairness and transparency, and what it really takes to balance innovation with ethical responsibility. So without further ado, let’s go and talk to Jesse. Jesse, it’s great to have you back on the show.
Jesse McCrosky: Thanks for having me, Swapnil. It’s a pleasure to be here.
Swapnil Bhartiya: Why is AI governance so important to AI strategy?
Jesse McCrosky: I think we’re in an era—and I think some Forrester research I’ve read backs this up—where businesses are starting to recognize that if they want to scale and innovate with AI, they need to get their own houses in order. This means establishing robust AI governance processes. I think we’ve seen and read in the headlines a lot of cases where these processes were not in place and things did not go well. Not just the dramatic cases of lawsuits and fines and that sort of thing, but also just cases where a large proportion of AI initiatives do not succeed and are not well aligned with business value.
Because AI really is a special kind of beast. It really has unique risks that require robust governance processes in order to ensure value. I can tell you a brief example. There’s sort of a canonical story in the AI ethics community that maybe some of your viewers have heard about: if you task a very intelligent AI with making as many paper clips as possible, maybe this AI is going to become too powerful, and it’s going to start turning everything—even us—into paper clips. It’s a sort of dystopian sci-fi story, so it’s a little bit silly, but it also hints at one of the really important issues in AI governance, which is the idea of alignment and how AI becomes very good at doing what you tell it to do, not necessarily what you actually want it to do.
This idea is operationalized in something called Goodhart’s Law, which is kind of like Murphy’s Law for techies, in a sense—how things will go wrong. To give an example about cobras: there was a story about how in India, at some point during the colonial era, the government decided there were too many cobras and they needed to do something about this. So they offered a bounty for people to bring in dead cobras, and they thought this was going to solve their problem.
So what we have here is a value that we want to achieve, which is fewer cobras, and at the same time, we have a metric that we’re applying optimization pressure to, which is people killing cobras. You think this is going to work out, but what Goodhart’s Law says is that because you’re applying this optimization pressure, people are going to find a way to cheat. That’s exactly what they did. We ended up with entrepreneurial people starting cobra breeding farms where they grew lots and lots of cobras that they could then kill and take in for bounties. It didn’t solve the problem at all.
This sounds a little bit far from AI, but the same sort of problem occurs in almost any kind of AI system. Think about content recommendation with video recommenders. We want to show people videos that they’re going to enjoy, but it’s really hard to measure whether people enjoy something, so instead you just measure: do they watch it? Do they click on it? You end up showing all this clickbait, all this really extreme content and that sort of thing, and we end up with these sorts of problems just all over the place.
Swapnil Bhartiya: What are the key elements of AI governance?
Jesse McCrosky: I like to think about AI governance in terms of risk. I think that you need to have a broad conception of what risk is, otherwise you end up with a very myopic kind of picture. But if we think broadly about risk—and when I give presentations, I have a slide about this framework of different sorts of risk—we talk about regulatory risk, which is fairly obvious in a sense. We’re in an era where AI regulation is very quickly being established and emerging, and it’s tough for the lawyers to even keep on top of it.
But as well, we think about operational risk, which is where an AI system may not be aligned with business value and may, in fact, cause harm to the business. We can also think about the risk of missing out if your competitors are using AI and you’re not, and they’re able to achieve efficiency gains. This is another sort of risk that needs to be considered.
Then, as we get more into the responsible tech realm of things, we can think about reputational risk and then social or environmental risk. Reputational risk is where my business, for example, might have made commitments to its customers, to the public, to its contractors, to whoever, saying, “These are our values. These are the principles we’re going to uphold.” If our deployment of AI is going to come into conflict with those, we face reputational risks. Similarly, if the company that I’m working for is doing something that’s not good for the world somehow, then we have these societal and environmental risks.
So once you have a broad enough conception of risk, it’s really about risk assessment and risk mitigation. I used to joke that all you really have to do is get a bunch of smart people in the room and have them think about what can go wrong and what to do about it. When it comes down to it, there’s some truth to that, but what AI governance does is systematize and make more robust these processes.
So you have things like actually getting clear on how, as a business, we’re going to operationalize our values and principles. We’re clear on who is accountable for what part of the process. We’re clear on the data governance principles—that’s a field that’s been around a little longer, and we can learn from its lessons. How are we going to work through a model lifecycle and make sure that after something’s deployed, we don’t just stop paying attention to it? There’s the documentation and transparency side of things, which is really important. And finally, focus on the people.
I think the upskilling and having the necessary expertise and creating the appropriate culture is so critical. AI governance very easily becomes a box-ticking exercise, and without a culture of actually caring about getting this right, it’s very difficult to end up with good outcomes.
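One way to make the lifecycle point above concrete is a minimal post-deployment check that a model’s live inputs still resemble its training data. This sketch uses the population stability index; the feature, distributions, and threshold are illustrative assumptions, not anything Zillow or Egen actually used.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's live distribution to its training
    distribution; a larger PSI means more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) on empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(42)
training_prices = rng.normal(300_000, 50_000, 10_000)  # training-time market
live_prices = rng.normal(340_000, 80_000, 2_000)       # shifted live market

psi = population_stability_index(training_prices, live_prices)
# A common rule of thumb treats PSI > 0.25 as significant drift that
# warrants retraining or human review before trusting the model.
print(f"PSI = {psi:.3f}", "-> investigate" if psi > 0.25 else "-> OK")
```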
Swapnil Bhartiya: Can you also talk about the scope and role of AI governance when organizations are building their AI systems?
Jesse McCrosky: Governance, in a general sense, is about making sure things go the way you want them to go, to put it simply. When an AI initiative is being designed, it should be designed for some purpose. There should be some sense of “this is what we want the world to look like if this project succeeds.” So what I would say is AI governance is a process to ensure that that potential is going to be realized, and also that we don’t end up with unintended consequences like breaking the law or ending up with lawsuits or whatever else.
What I would say is that with governance, people most often think about compliance—they think about fines, they think about lawsuits and everything like that. But it’s also very important to think about the business outcomes.
It’s a little bit of an older example, but still a very good one. I think Zillow had an AI system they used to estimate the values of properties and the potential for flipping. So they had a subsidiary business where they were buying and selling homes and turning a profit. I certainly don’t want to cast any shade on anyone there—this was very early in the understanding of AI governance. But let’s just say, for example, they probably didn’t have an adequate governance process in place.
What ended up happening was their model was not actually robust enough to deal with changing market conditions, and they ended up with quite a disastrous situation where they had a lot of houses they had to sell at a loss. I think I read they lost something like $500 million in just one year. They shut down that division, they laid off a quarter of their staff. So this is not about breaking the law. This is not about anyone getting angry at you and suing you. This is just an AI system that’s too risky, not being adequately managed and mitigated, and ending up with very bad outcomes for the business.
Swapnil Bhartiya: How does AI governance help prevent bias and ensure fairness in AI systems?
Jesse McCrosky: Bias and fairness is a topic close to my heart, because I think a lot of people get it wrong. You can probably find plenty of people who want to sell you “unbiased models” and “fair data” or whatever else, and I would definitely be a little bit skeptical of those folks.
Nothing is unbiased and nothing is fair when it comes to data, at least not data that comes from the real world. When we think statistically about data, we think about data generating processes. For the AI we’re talking about today, the data generating process is the real world. It’s people, it’s society, and like it or not, we do not live in a just society. There’s unfairness everywhere, and so the data that’s generated by that process is also going to be unfair, and there is nothing you can do about that.
To give a simple example, think about medical data. We can say that the data generating process is biased. Let’s say we want to investigate the effectiveness of certain treatments for different people. There’s going to be inequities in access to these treatments, and certain sorts of people—usually poor people—are going to have less access to the treatments, and so are going to contribute less to the data. So that’s one sort of bias.
Then we can talk about the data collection process. Maybe certain clinics have good computer systems and networks so we can get their data, and others don’t. So we’re going to end up having bias where we’re collecting data mostly from clinics in richer areas that have better access to the networking equipment.
Then we can talk about data curation. Let’s say the researchers are speaking English, and they have trouble dealing with other languages, so when they’re curating data, they’re going to say, “Maybe to make our lives easier, we’re only going to use the English data,” and so there’s more bias. And then the model itself can increase the bias as well.
So no matter what you do, you’re going to have bias in your data. I’ll return to my point before about how AI governance is about risk assessment and risk mitigation. We understand that our data and our models are biased. What are we going to do about it to ensure that we don’t create harm in the real world?
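A toy simulation can make the first of those stages tangible. In this sketch (all numbers invented for illustration), a treatment helps one group more than another, but unequal access means the pooled data mostly reflects the better-served group:

```python
import numpy as np

# Toy sketch with invented numbers: a treatment helps group B more
# than group A, but group B has less access to care, so it is
# underrepresented in the data a model would learn from.
rng = np.random.default_rng(7)

def outcomes(n, effect):
    return rng.normal(effect, 1.0, n)   # observed improvement per patient

true_effect = {"A": 0.2, "B": 0.8}      # the real world
access      = {"A": 900, "B": 100}      # biased data-generating process

sample = np.concatenate([outcomes(n, true_effect[g])
                         for g, n in access.items()])
print(f"pooled estimate: {sample.mean():.2f}")       # dragged toward group A
print(f"truth with equal groups: {(0.2 + 0.8) / 2:.2f}")
# The estimate is not bad arithmetic: it faithfully reflects a biased
# process. Risk mitigation (reweighting, stratified reporting) targets
# the harm; no amount of modeling makes the data "unbiased".
```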
If you’ll forgive me, one more brief anecdote. There’s a great story about DALL-E from OpenAI, if you’re familiar with it. It’s an AI model that generates images from prompts. So you can say, “Show me pictures of cats dancing,” and it’ll make pictures of cats dancing or whatever. But they had problems with bias in the early days. If you asked for pictures of lawyers, it would give you eight pictures of white guys. If you asked for pictures of flight attendants, it would give you eight pictures of Asian women, for whatever reason. So clearly, there’s a problem here.
What happened was OpenAI put out a blog post—super brief, like one page on the screen—saying, “Hey, we had this problem. We fixed it. Don’t worry. All good now.” And it was true. If you asked for pictures of lawyers, some of them would be women, some of them would be different ethnicities, whatever else. But they said nothing about how they fixed the problem.
This is the really fascinating part. Somebody in the community, some anonymous soul, came up with a great hypothesis and a way to test it. What they did was they asked DALL-E to generate pictures of “a person holding a sign saying”—and that was all they asked for. What they got was pictures of people holding signs, and some of those signs said “woman,” and some of those signs said “Black,” and some of those signs said whatever else.
So what they realized was that all OpenAI was doing was saying, “Hey, if this is a prompt that has something to do with showing people where bias might be an issue, just randomly append the name of some underrepresented group to the end of the prompt.” The community just ripped into OpenAI about this. It was really funny, actually—a lot of good witticisms on Twitter, as it was called back in the day.
OpenAI didn’t really respond to that, but the point is that I’m actually not as hard on OpenAI as the rest of the community in this case, because I think what they did was at least pointing in the right direction. The point is that I don’t think you can tell me, if you have eight pictures of lawyers, how many of them should be women? How many of them should be men? How many of them should be white or Black or whatever else? There’s no right answer to that question. Should it represent the human population or the population of lawyers, or should there be more women to counteract past injustices? You can debate this stuff forever. There’s no easy answer there, and there’s no such thing as an unbiased model in this case.
So what OpenAI did was they did a risk assessment. They saw, “Hey, we’re showing very non-representational pictures here. This could end up in some campus brochure, and people are going to think like, ‘Hey, only white guys can be lawyers’ or whatever. This is a harm that we see a risk of.” So what can we do to mitigate this risk? And they came up with a very simple solution to that.
I’m not saying that what OpenAI did was the final perfect answer, but I would say that they were following the right blueprint in the sense of AI governance, and that this kind of work is continuing. Of course, Google got into a lot of trouble over that kind of thing later. There’s still a lot of work and refinements to happen, but it shows the right kind of thinking and how you actually deal with bias.
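For concreteness, here is a rough sketch of the mitigation the community hypothesized: randomly appending an underrepresented-group term to prompts that depict people. The word lists and trigger heuristic are invented for illustration and should not be read as OpenAI’s actual implementation.

```python
import random

# Hypothetical reconstruction of the community's hypothesis about the
# DALL-E fix. The word lists and trigger heuristic are invented for
# illustration; they are NOT OpenAI's actual implementation.
PEOPLE_TERMS = {"lawyer", "doctor", "flight attendant", "person", "ceo"}
APPEND_TERMS = ["woman", "man", "Black", "Asian", "Hispanic"]

def mitigated_prompt(prompt: str) -> str:
    """Randomly append a demographic term when the prompt depicts people."""
    if any(term in prompt.lower() for term in PEOPLE_TERMS):
        return f"{prompt} {random.choice(APPEND_TERMS)}"
    return prompt

random.seed(1)
for _ in range(3):
    print(mitigated_prompt("a portrait of a lawyer"))
# The "person holding a sign saying" probe exposed exactly this:
# the appended word surfaced verbatim in the generated image.
```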
Swapnil Bhartiya: How do you balance innovation with the need for responsible AI governance?
Jesse McCrosky: Good question. What I would argue is that it’s not something that needs to be balanced. I think AI governance can support innovation. The way to see this is really design thinking, where when we want to build a system, we want to think about how we want to design it. We want to understand how we’re going to measure success. We want to envision the world that we want to create by building the system. When we have this sort of thinking, then we can understand how design processes and governance processes can come together.
The one place that I think balance is needed is that there’s this idea of sandboxes, where, earlier on, developers, AI architects, whoever else, should be free to experiment and to play and to try different things without worrying about going through a lot of documentation or whatever else. I think this is important to support, but at the same time, AI governance is something that has to be a part of the entire AI lifecycle. It’s not something you can tack on at the end.
So I think the real balancing act is: how do you create processes that support people in early stages of innovation to better understand what they’re trying to accomplish and better understand how they’re going to measure success? When you have a process that can support that, then you have a process that will support innovation.
Swapnil Bhartiya: How do you ensure AI systems are aligned with ethical principles and societal values, especially given that these values change depending on the region?
Jesse McCrosky: Great question, Swapnil, and there’s a lot of philosophical implications there that we could get into in five or six hours over a couple of beers. But I’ll give you the brief answer.
I think the first part is it’s important for a business to clarify its own ethical principles and values, and those should be respected above all. At the same time, it is important—businesses often work in very international, global contexts, and it’s important to understand the cultural norms that you’re working in. So it may be necessary to have different variations of a product for different audiences. That’s another aspect of governance.
But what I’ll say for now is that once you’ve clarified what values and principles are important to you, it really is a continuous process. It’s something where you have to engage from the very beginning of the design phase, through execution, through deployment, through monitoring, all the way through.
I think one important part of this is engaging with stakeholders who are representative of the sorts of people that are going to be impacted by your products. So this can mean having diverse teams so you understand the different perspectives internally. But it can also mean external stakeholder conversations. This doesn’t have to be a heavy lift. It can be as simple as ordinary UX research, where you’re talking to different sorts of customers that might be using your products and making sure you understand their perspectives as well.
The other thing I would say—and this is a little bit off topic, but I think it’s important enough to include—is that transparency is often trumpeted as one of the important aspects of AI governance, and that’s for a very good reason. I think that AI does have the potential to change our world and our society and our economy. I would say that everybody has the right to be involved in conversations about what sort of world we’re building.
So if we want everybody to be able to participate in that conversation, we need to make sure everybody has some idea of what’s being built and how it’s being designed, and why it’s being designed the way that it is. That’s why I think transparency is so important—to give people the information they need to be active participants in conversations about what sort of world we’re building.
Of course, businesses have secrets. Businesses have confidential data. We don’t have to give away the whole farm, but it is important that at a high level, at a design perspective, we are honest and upfront with the public about what we’re building and why we’re building it.
Swapnil Bhartiya: Jesse, thank you so much for joining me today and talking about this important topic that is often overlooked. Thanks for the great insights, and I look forward to having this discussion with you again. Thank you.
Jesse McCrosky: It’s really been a pleasure. Thank you for having me.