In today’s AI gold rush, the greatest risk isn’t failure—it’s building something powerful that does the wrong thing. In this TFiR clip, Jesse McCrosky, Principal Architect – GenAI at Egen, explains why AI governance must be a core pillar of any enterprise AI strategy. Drawing on both modern case studies and historical parallels, he breaks down the real dangers of misaligned metrics and unchecked optimization.
Not Just About Lawsuits—About Misalignment
Jesse explains that AI carries unique risks: not just regulatory exposure, but the tendency to optimize for flawed metrics. He references the famous “Cobra Effect”: pay a bounty for dead cobras, and people start breeding cobras. The same dynamic appears in AI when we optimize for clicks or views, and the result is clickbait, amplified bias, and other unintended outcomes.
Goodhart’s Law in Practice
Goodhart’s Law, “When a measure becomes a target, it ceases to be a good measure,” is especially relevant to AI, where systems amplify the incentives built into their design. “AI becomes very good at doing what you tell it to do,” Jesse warns, “but not necessarily what you meant it to do.”
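The gap between the metric you optimize and the outcome you meant can be sketched in a few lines. In this toy example (all article titles and scores are invented for illustration), a feed ranked by predicted click rate surfaces clickbait first, while ranking by reader satisfaction, the outcome we actually care about, puts it last:

```python
# Toy illustration of Goodhart's Law in a content-ranking system.
# All data below is hypothetical, invented for this sketch.

articles = [
    # (title, predicted_click_rate, reader_satisfaction)
    ("You Won't Believe What This AI Did", 0.90, 0.20),  # clickbait
    ("A Careful Guide to AI Governance",   0.30, 0.90),  # substantive
    ("Quarterly Model Audit Results",      0.20, 0.85),  # substantive
]

def rank(items, metric):
    """Rank articles by the chosen metric, highest first."""
    return sorted(items, key=metric, reverse=True)

# Optimizing the proxy metric (clicks) rewards clickbait...
by_clicks = rank(articles, metric=lambda a: a[1])

# ...while the intended outcome (satisfaction) ranks it last.
by_satisfaction = rank(articles, metric=lambda a: a[2])

print(by_clicks[0][0])        # clickbait tops the click-optimized feed
print(by_satisfaction[0][0])  # substantive piece tops the satisfaction ranking
```

The two rankings disagree completely: the item that wins on the proxy metric loses on the real one. That divergence, scaled up by an optimizer that relentlessly pursues the proxy, is exactly the failure mode governance frameworks exist to catch.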
The Call for Governance
Without deliberate governance frameworks, organizations are likely to see their AI investments fail—or worse, backfire. Governance isn’t red tape—it’s how you ensure AI aligns with business outcomes, user trust, and ethical integrity.