Why Your AI Agents Are Stuck in Pilot Hell, And What to Do About It | Marie Forshaw, CData | TFiR

Guest: Marie Forshaw, SVP of Product Marketing at CData

Ninety percent of AI proof-of-concepts never reach production. The model isn’t the problem. The prompt isn’t the problem. The data is. When agents can’t access the systems they need, can’t interpret what they’re reading, and operate without governance guardrails, even the most ambitious AI investment stalls. For enterprises that have already committed millions to AI transformation, this isn’t a minor technical hurdle. It’s a business-critical failure point.

Marie Forshaw, SVP of Product Marketing at CData, joined TFiR to explain how the company is addressing this gap with Connect AI, its AI-native data layer built on 15 years of enterprise connectivity experience, and what a new product release means for organizations ready to stop piloting and start deploying.

The Three-Pillar Framework That Separates Pilots From Production

Forshaw frames the problem around three pillars: connectivity, context, and control. Miss any one of them, she argues, and the move from pilot to production becomes impossible.

On connectivity, the issue isn’t simply whether an AI agent can reach a data source. It’s whether it can reach it deeply enough.

“Most MCP, or Model Context Protocol, servers on the market are really built around thin REST API wrappers,” Forshaw explained. “You actually need deep integration behind that MCP server to make it effective.” CData Connect AI connects to over 350 source systems—including SaaS platforms, ERPs, CRMs, databases, and data warehouses—using driver-level technology that surfaces all fields, objects, and custom entities.

On context, the challenge is ensuring the AI doesn’t just receive data, but understands it. Forshaw described this as source-level intelligence: “If you’re asking Claude or ChatGPT to do something with Salesforce, it can understand how Salesforce works as a system—what accounts are, what objects are, and how they all relate to each other—much like a human would.” That structural understanding, she said, is what drives higher accuracy and fewer hallucinations.

On control, the concern is governance at scale. Forshaw was direct: relying on the AI model itself to enforce security is not a viable enterprise strategy. “You want to make sure you have a layer that is controlling that before it ever gets to the model or the tool.” Connect AI enforces identity-aware permissions that trace all the way back to the source system, recording which user or agent accessed which fields, through which tool.
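The pattern Forshaw describes, enforcing permissions before a request ever reaches the model or tool while logging who touched what, can be sketched as a simple gate. This is a hypothetical illustration, not CData's implementation: the class names, the ACL shape, and the `tool` label are all invented for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class AuditEntry:
    """Records which user or agent accessed which fields, through which tool."""
    user: str
    tool: str
    fields: tuple

@dataclass
class PermissionGate:
    # Allowed fields per (user, source) pair, mirroring source-system ACLs.
    acl: dict
    audit_log: list = field(default_factory=list)

    def authorize(self, user, source, requested_fields, tool):
        # Filter the request down to what this identity may see,
        # *before* anything is handed to the model.
        allowed = self.acl.get((user, source), set())
        granted = [f for f in requested_fields if f in allowed]
        self.audit_log.append(AuditEntry(user, tool, tuple(granted)))
        return granted

gate = PermissionGate(acl={("alice", "salesforce"): {"account", "opportunity"}})
granted = gate.authorize("alice", "salesforce",
                         ["account", "ssn", "opportunity"], tool="claude")
# The disallowed field is stripped before the model ever sees it.
```

The key design point is that the gate, not the model, is the enforcement boundary: the model only ever receives data that has already passed the identity check.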

Why Accuracy Compounds, And Why 98.5% Matters

One of the most compelling data points in the discussion was CData’s internal benchmark showing Connect AI achieving 98.5% accuracy on agent tasks, compared to 65 to 75% for other MCP architectural patterns. Forshaw explained why this gap becomes critical in agentic workflows. “You can start with a 75% accuracy rate at the beginning of five steps, and at the end of it you’re down to a 25% accuracy rate, which is a non-starter for most organizations.” The benchmark was run across nearly 400 prompts, four MCP architecture types, and five source systems, and the test harness has been made publicly available for replication.
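Forshaw's arithmetic follows from treating each agent step as an independent success probability, so per-step accuracy compounds multiplicatively across a workflow. A minimal sketch:

```python
def end_to_end_accuracy(per_step: float, steps: int) -> float:
    """End-to-end accuracy when every step must succeed independently."""
    return per_step ** steps

# 75% per step over five steps collapses to roughly the 25% Forshaw cites.
print(round(end_to_end_accuracy(0.75, 5), 3))   # 0.237
# 98.5% per step over the same five steps stays above 92%.
print(round(end_to_end_accuracy(0.985, 5), 3))  # 0.927
```

The independence assumption is a simplification, but it shows why a seemingly modest per-step gap widens into an unusable system as workflows grow longer.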

What’s New in the Latest Connect AI Release

The latest release focuses on expanding all three pillars. On connectivity, CData added a connect gateway to enable live read and write access to sources behind the firewall, including SAP, SQL Server, and Postgres.

The most significant additions are in the context category, where CData introduced three distinct MCP tool types. Universal tools provide schema-aware, normalized operations across all connected sources, designed for multi-source reasoning without overwhelming the AI’s context window. Source tools offer tightly defined, system-specific operations, such as converting a lead or closing an opportunity in Salesforce, with granular toggle controls for what each user can and cannot do. Custom tools allow organizations to build purpose-built operations for specific repeatable workflows, with explicit data access limits that improve both precision and token efficiency.

Rounding out the release are workspaces, which define which datasets and schemas an agent can access, and tool kits, which define which tool types are available to a given agent. Together, they allow teams to create a dedicated MCP server scoped to a precise set of data and actions, which is the kind of deterministic boundary enterprises need before they trust an agent with production systems.
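The scoping model described above, a workspace bounding the data and a tool kit bounding the actions, can be pictured as a configuration like the following. The field names and values here are invented for illustration and do not reflect actual Connect AI syntax.

```python
# Hypothetical sketch of a scoped MCP server definition: the workspace
# limits which datasets and schemas the agent can reach, and the tool kit
# limits which of the three tool types it can invoke.
scoped_server = {
    "workspace": {
        "sources": ["salesforce"],
        "schemas": ["sales.accounts", "sales.opportunities"],
    },
    "toolkit": {
        # Schema-aware operations usable across any connected source.
        "universal_tools": ["query", "describe_schema"],
        # Tightly defined, system-specific operations with per-user toggles.
        "source_tools": {"salesforce": ["convert_lead", "close_opportunity"]},
        # Purpose-built operations for specific repeatable workflows.
        "custom_tools": ["weekly_pipeline_report"],
    },
}
```

The value of such a boundary is that it is deterministic: whatever the agent decides, it cannot reach data outside the workspace or perform actions outside the tool kit.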

The Advice for Platform Engineers Under Pressure

Forshaw closed with practical guidance for platform and data engineers being asked to “do something with AI” without a clear roadmap. Her recommendation: invest in infrastructure that will scale regardless of which AI tool dominates tomorrow. “Individual MCP servers can certainly make the connection, but they can’t provide what you need to actually scale that deployment when you’re ready.” Building the connectivity, context, and control layer now, she argued, is the investment that makes every future AI initiative faster and more defensible.
