The Real Building Blocks of AI Governance

AI governance isn’t just about regulation—it’s about anticipating what can go wrong, and making sure the right people are accountable when it does. In this TFiR clip, Jesse McCrosky, Principal Architect – GenAI at Egen, lays out a pragmatic, risk-centered approach to AI governance that extends far beyond legal compliance.

Risk Comes in Many Forms

“We need a broad conception of risk,” says McCrosky. Beyond regulatory risk, businesses need to account for:

  • Operational risk (AI systems producing misaligned or unintended outcomes)
  • Reputational risk (AI behavior conflicting with company values)
  • Social and environmental risk
  • Competitive risk (falling behind peers who adopt AI effectively)

Each of these requires a distinct lens—and collectively, they inform a stronger governance posture.

From Checklists to Culture

McCrosky emphasizes that governance can’t be performative. Effective frameworks must:

  • Define who is accountable and for what
  • Maintain model lifecycle oversight post-deployment
  • Include transparent documentation and decision-making
  • Prioritize upskilling and AI literacy across teams

“Without a culture of actually caring about getting this right,” he says, “it’s very difficult to end up with good outcomes.”
