Guest: Paul Merrison (LinkedIn)
Company: Tetrate
Show Name: An Eye on AI
Topic: Agentic AI, AI Governance
Autonomous AI is no longer a lab experiment. Modern AI agents can make independent decisions, call external tools, and even modify systems — capabilities that introduce a new class of risks beyond traditional AI models. Recognizing this shift, Tetrate has collaborated with the Fintech Open Source Foundation (FINOS) to expand the FINOS AI Governance Framework to address the security and compliance challenges of agentic AI.
Paul Merrison, Head of Information Security and AI Governance Lead at Tetrate, explains that while earlier frameworks were designed around retrieval-augmented generation (RAG) chatbots, they did not account for AI agents’ ability to act. “The big difference between an AI agent and a traditional chatbot is that it can effect change — it can modify other systems or even talk to other agents,” Merrison said. “That new capability of effecting change is what really drove us to think hard about the security of these systems.”
Expanding Governance for a New Class of AI
Tetrate’s work began by developing a reference architecture for agentic AI — identifying the unique risks that come with memory, tool use, and autonomy. Traditional controls were no longer sufficient because agents can persist knowledge, execute tasks independently, and chain multiple tools to complete objectives. These new behaviors introduce failure modes like data poisoning, memory manipulation, or unauthorized actions that can propagate at machine speed.
From this analysis, Tetrate and FINOS introduced six new mitigations tailored to these risks. The additions provide actionable guidance for enterprises implementing AI systems that can operate beyond human supervision. Merrison noted that the controls are “action-oriented,” designed to be implemented by teams at different stages of their AI maturity without stalling innovation.
From Policy to Operational Governance
Governance, Merrison stressed, has to move beyond documentation. “We’ve seen companies with binders full of AI policy PDFs, but governance that isn’t enforced in production doesn’t protect anyone,” he said. That philosophy underpins Tetrate’s own Agent Operations Director — a platform built to make AI governance operational.
The tool provides visibility into how agents behave, detects anomalies, and validates actions. It can spot prompt manipulation, prevent unauthorized tool use, and ensure agents operate within defined boundaries. “We developed it to make it easy for enterprises to achieve governance without slowing down innovation,” Merrison explained.
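To make the idea of validating actions concrete, here is a minimal sketch of a policy gate that checks each tool call an agent proposes before it executes. The tool names, argument rule, and `ToolCall` structure are illustrative assumptions, not Tetrate's actual API.

```python
from dataclasses import dataclass

# Hypothetical allowlist: the only tools this agent may invoke.
ALLOWED_TOOLS = {"search_docs", "create_ticket"}

@dataclass
class ToolCall:
    tool: str
    args: dict

def validate(call: ToolCall) -> bool:
    """Return True only if the call stays within the agent's defined boundary."""
    if call.tool not in ALLOWED_TOOLS:
        return False  # unauthorized tool use is blocked outright
    # Example argument rule: tickets may not embed raw credentials.
    if call.tool == "create_ticket" and "password" in str(call.args).lower():
        return False
    return True

print(validate(ToolCall("search_docs", {"query": "rate limits"})))       # True
print(validate(ToolCall("send_email", {"to": "attacker@example.com"})))  # False
```

In practice a platform like the one described would enforce such checks at runtime, in the request path, rather than inside the agent's own code — which is what makes the governance operational rather than documentary.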
Blueprints for Regulated Industries
The expanded FINOS framework also reflects lessons learned from highly regulated sectors like financial services. Merrison said that by contributing domain expertise and reference architectures, Tetrate and its partners made compliance easier for companies that must navigate overlapping requirements such as the EU AI Act and the NIST AI Risk Management Framework. “They don’t have to develop all that expertise internally — we’ve already done the work,” he said. “Now they can adopt a blueprint that represents cross-industry consensus on best practices.”
Preventing Agent Hacking and Misuse
The collaboration also integrates detailed threat modeling. With AI agents, new attack vectors appear — prompt injection, lateral movement between systems, and misuse of toolchains. Merrison explained one scenario: a malicious actor could trick an agent into reading sensitive information from a calendar and then emailing it out. “We specifically built mitigations for tool manipulation — controls that keep agents from executing sequences that lead to data leakage or unauthorized actions.”
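The calendar-to-email scenario above can be sketched as a sequence-level control: once an agent has read from a sensitive source, tools capable of sending data out are blocked for the rest of the session. The source and tool names are hypothetical, and this is an assumption about how such a mitigation might work, not the framework's actual control text.

```python
# Illustrative taint-tracking guard for an agent session.
SENSITIVE_SOURCES = {"calendar", "crm"}
EXFIL_TOOLS = {"send_email", "http_post"}

class SessionGuard:
    def __init__(self):
        self.tainted = False  # has the agent touched sensitive data?

    def check(self, tool, source=None):
        """Return True if the tool call may proceed in this session."""
        if source in SENSITIVE_SOURCES:
            self.tainted = True
        if tool in EXFIL_TOOLS and self.tainted:
            return False  # block the read-then-exfiltrate sequence
        return True

guard = SessionGuard()
print(guard.check("read", source="calendar"))  # True: reading is allowed
print(guard.check("send_email"))               # False: emailing afterward is not
```

The point of sequencing the check this way is that neither step is malicious on its own; only the combination leaks data, so the control has to carry state across the session.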
A Scalable Path for Enterprise Adoption
When asked how CISOs should approach AI governance within the first 90 days, Merrison recommended a practical, incremental plan. “Start by knowing what you have,” he advised. “In many organizations, employees are already using AI tools like ChatGPT with company data. Map that usage first. Once you understand what’s running, you can apply more advanced controls like hallucination detection, tool validation, and behavioral monitoring.”
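The "start by knowing what you have" step can be as simple as scanning egress or proxy logs for traffic to known AI services. A minimal sketch, assuming plain-text log lines and an illustrative (far from complete) hostname list:

```python
# Map shadow AI usage by counting requests per known AI host in egress logs.
AI_HOSTS = ("api.openai.com", "chat.openai.com", "api.anthropic.com")

def map_ai_usage(log_lines):
    """Return a dict of AI hostname -> request count seen in the logs."""
    usage = {}
    for line in log_lines:
        for host in AI_HOSTS:
            if host in line:
                usage[host] = usage.get(host, 0) + 1
    return usage

logs = [
    "10:01 user=alice POST https://api.openai.com/v1/chat/completions",
    "10:02 user=bob   GET  https://internal.example.com/wiki",
    "10:03 user=alice POST https://api.openai.com/v1/chat/completions",
]
print(map_ai_usage(logs))  # {'api.openai.com': 2}
```

A real inventory would also cover browser extensions, SaaS integrations, and sanctioned tools, but even a crude count like this gives the visibility Merrison recommends establishing before layering on advanced controls.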
The framework, he added, was built to grow with organizations. “It will meet you wherever you are on your AI journey — from initial discovery to production-grade governance.”
Beyond Finance: A Model for Every Industry
Although designed with regulated industries in mind, the new safeguards are applicable to any enterprise deploying AI agents. Merrison compared this trajectory to the early days of cloud adoption. “Financial services set the standards for compliance in cloud computing, and eventually everyone followed. The same will happen with AI governance — these frameworks will become the norm for anyone who cares about trust and safety.”
As AI systems continue to gain autonomy, frameworks like the one expanded by Tetrate and FINOS will become the foundation for safe, enterprise-ready deployment. Governance can no longer live in static documents — it must be embedded directly into the tools and systems running AI in production.