groundcover, a cloud-native observability platform challenging legacy solutions like Datadog and New Relic, has launched a Model Context Protocol (MCP) server that promises to transform how development teams leverage AI to ship more reliable code.
In this TFIR exclusive, Swapnil Bhartiya speaks with Orr Benjamin, VP of Product at groundcover, about the launch of their new MCP server designed for LLM-driven observability. Orr explains what Model Context Protocol (MCP) is, why it’s gaining traction, and how groundcover’s innovative architecture enabled rapid MCP development. The conversation explores the impact of AI on observability, security implications, ethical concerns, and how customer feedback fuels innovation at groundcover.
Q&A: Orr Benjamin, VP of Product at groundcover
Swapnil Bhartiya: Welcome to TFIR. Today we have with us Orr Benjamin, VP of Product at groundcover, to talk about your new MCP server for LLM-driven observability. Let’s start by reminding our viewers: What is groundcover all about?
Orr Benjamin: groundcover is a modern, cloud-native observability solution. We’re built to replace legacy tools like Datadog and New Relic. With our advanced eBPF sensor, we allow customers to see everything in their environment without any instrumentation or development effort. Our solution significantly reduces costs while enabling full observability, and we support a bring-your-own-cloud architecture so customers retain control over their data.
Swapnil Bhartiya: You’re making a big announcement around MCP. What exactly is MCP, and what’s driving the recent surge in interest?
Orr Benjamin: MCP (Model Context Protocol) is an open-source protocol introduced by Anthropic, the company behind Claude. It’s quickly becoming a standard for enabling AI agents to access external data sources for context. Think of it like a USB-C port for AI: a standard connector for plugging models into powerful datasets. The more context an LLM has, the more effective it becomes, and MCP makes that integration seamless.
Swapnil Bhartiya: How did groundcover manage to build the MCP server so quickly?
Orr Benjamin: Our architecture was designed from day one to expose data. Initially, that was via APIs for our customers, but the transition to MCP was a natural evolution. Whether it’s topology, logs, or traces, we already had the infrastructure to make it accessible. MCP just layered on top of our existing platform.
Swapnil Bhartiya: Can you elaborate on the internal workings of your MCP implementation?
Orr Benjamin: Sure. We prioritized increasing what we call “knowledge density”—taking billions of logs and surfacing the most meaningful signals. We created tools like get_log, get_trace, and get_alert with parameter support, allowing agents to filter and request data efficiently. It’s all about surfacing the right data for AI to make smart decisions quickly.
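To make the idea concrete, here is a minimal sketch of the kind of JSON-RPC `tools/call` request an AI agent sends to an MCP server. The method shape follows the open MCP specification; the `get_log` tool name comes from the interview, but the argument names (`namespace`, `level`, `since`, `limit`) are illustrative assumptions, not groundcover’s documented schema.

```python
import json

# Hypothetical MCP "tools/call" request an agent might issue against an
# observability MCP server. MCP is JSON-RPC 2.0 based; the tool name
# "get_log" is mentioned in the interview, while the argument fields
# below are assumed for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_log",
        "arguments": {
            "namespace": "payments",  # scope the query to one namespace
            "level": "error",         # surface only error-level logs
            "since": "15m",           # look back fifteen minutes
            "limit": 50,              # cap the payload the LLM must read
        },
    },
}

# Serialize the request as it would travel over an MCP transport.
wire = json.dumps(request, indent=2)
print(wire)
```

Parameters like these are what let an agent raise “knowledge density”: instead of streaming billions of raw log lines, it asks the server to pre-filter and return only the signals worth reasoning over.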
Swapnil Bhartiya: What role did customer feedback play in developing this?
Orr Benjamin: Customer input was everything. We got daily questions about MCP through our Slack channels. We’d ask customers what they wanted to do with it, and their use cases—from AI IDE integration to real-time support log access—shaped our development priorities. Their excitement and input drove rapid iteration.
Swapnil Bhartiya: We’ve covered observability from its early days—metrics, logs, traces. With OpenTelemetry, security-first culture, and now AI, what’s the future of observability?
Orr Benjamin: AI represents the ideal Site Reliability Engineer (SRE). It can correlate billions of events across logs, traces, and metrics to pinpoint root causes. Tools like Cursor can use insights from groundcover, access your codebase, and propose fixes—end-to-end. It’s a massive productivity leap.
Swapnil Bhartiya: There are also ethical concerns with AI—hallucinations, privacy breaches. What precautions are you taking?
Orr Benjamin: Security and ethical AI are critical. We built a generic data access layer that allows fine-grained control over what data is exposed. Whether it’s an AI or a human, access is filtered by cluster, service, or namespace. And since we use a bring-your-own-cloud model, your data never leaves your environment.
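The scoping idea described here can be sketched in a few lines. The scope fields (cluster, service, namespace) come from the interview; the function, record shape, and policy format are hypothetical stand-ins for whatever groundcover’s actual data access layer does, included only to show how one filter can serve both human and AI callers.

```python
# Hypothetical sketch of scope-based filtering in a shared data access
# layer. A value of None for a scope key means "unrestricted"; otherwise
# the record's value must appear in the allowed set.

def filter_by_scope(records, allowed):
    """Return only records whose cluster/service/namespace fall within
    the caller's allowed scopes."""
    def permitted(record):
        return all(
            allowed.get(key) is None or record.get(key) in allowed[key]
            for key in ("cluster", "service", "namespace")
        )
    return [r for r in records if permitted(r)]

logs = [
    {"cluster": "prod", "namespace": "payments", "service": "api", "msg": "timeout"},
    {"cluster": "prod", "namespace": "internal", "service": "billing", "msg": "ok"},
]

# An agent scoped to the "payments" namespace sees only the first record.
scoped = filter_by_scope(
    logs,
    {"cluster": None, "service": None, "namespace": {"payments"}},
)
```

Because the same filter runs regardless of who is asking, an LLM agent can never see more than the operator who configured its scope intended.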
Swapnil Bhartiya: What other challenges are you solving where legacy vendors like Datadog or New Relic fall short?
Orr Benjamin: One area is AI-assisted migration. We’re making it easy to switch from Datadog to groundcover by using AI to translate alerts, dashboards, and other assets. Another is remote config for eBPF sensors—think red-button observability controls. Also, we’re working on comparison analysis, which enables agents to compare “good” vs. “bad” data sets and auto-investigate anomalies.
Swapnil Bhartiya: Orr, thank you so much for joining us and sharing these insights. We look forward to hearing more from groundcover as your platform continues to evolve.
Orr Benjamin: Thanks for having me, Swapnil. We’re just getting started.