Observability used to be a post-mortem tool. In the era of agentic AI, it’s the foundational grid of truth that AI agents need to write, test, and troubleshoot production code.
The Guest: Shahar Azulay, CEO and co-founder of groundcover
The Bottom Line
- Observability platforms are no longer reactive troubleshooting layers—they’re becoming the operating system for AI-driven software development, providing the real-time production context that AI agents need to build, test, and debug code autonomously.
***
Speaking with TFiR, Shahar Azulay of groundcover defined the current state of observability as a fundamental shift from post-production investigation to real-time AI agent infrastructure.
What Is Observability in the Agentic AI Era?
Azulay explained that observability is transitioning from a tool used primarily for investigating downtime and troubleshooting production issues to a foundational layer that AI agents depend on to operate effectively across the software development lifecycle.
Shahar Azulay: “Observability is changing from something that was more post-production focused—I used to use it to investigate downtime and troubleshoot issues, and I needed a lot of telemetry to do so—to something that is now pivotal, a grid of truth for any AI operating app and the agents running on top of it. Whether I’m performing root cause analysis or building code with my agents, I need context from production that observability can provide to build better and troubleshoot better. Observability is becoming more critical and a bigger part of the operating system for the agentic SDLC. People are building code with it, troubleshooting code with it, and testing code with it.”
Broader Context
During the full TFiR interview at KubeCon EU, Azulay elaborated on how groundcover's architecture, a bring-your-own-cloud model that keeps each customer's telemetry inside that customer's own cloud environment, enables this new paradigm. The company announced agent mode at KubeCon, which lets users work with groundcover's AI agent across the platform for tasks like navigating dashboards, troubleshooting, and delegating context to external AI-native tools such as Cursor for code generation.
Shahar Azulay: “groundcover has always been different. We operate on top of a system that is very well-suited to the area we operate in—a bring-your-own-cloud architecture, which basically means we don’t store our customers’ telemetry. We host it in their cloud environment, privately secured, and we’re designed for data abundance in a very cost-effective way. Customers can have more telemetry, including more high-fidelity telemetry, to troubleshoot. This is exactly what AI needs as well. We’ve announced at KubeCon our agent mode, which is basically our agentic experience inside groundcover. You can either operate with our agent throughout the platform—navigate, build dashboards, and troubleshoot—or you can delegate tasks using the agent to other agents used today. If you’re using Cursor to build code, you can delegate from the agent mode in groundcover and get all the context from production into your other AI-native stack. groundcover is basically becoming agent-to-agent aware, enabling communication with both human users and agent users.”
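The agent-to-agent handoff Azulay describes, where the observability layer packages production context and delegates it to an external coding agent, can be illustrated with a minimal sketch. Every name below (`ProductionContext`, `build_handoff_payload`, the payload fields) is a hypothetical illustration of the pattern, not groundcover's actual API.

```python
from dataclasses import dataclass, field
import json


@dataclass
class ProductionContext:
    """Hypothetical snapshot of production telemetry to hand off to a coding agent."""
    service: str
    error_rate: float
    recent_traces: list = field(default_factory=list)


def build_handoff_payload(ctx: ProductionContext) -> str:
    """Serialize observability context into a JSON payload that a downstream
    AI-native coding tool could consume as prompt context.

    This is an illustrative sketch of agent-to-agent delegation, not a real
    protocol or vendor API.
    """
    return json.dumps(
        {
            "source": "observability-agent",
            "target": "code-agent",
            "service": ctx.service,
            "error_rate": ctx.error_rate,
            "traces": ctx.recent_traces,
        },
        indent=2,
    )


# Example: package a failing service's context for a code-generation agent.
ctx = ProductionContext(
    service="checkout",
    error_rate=0.042,
    recent_traces=["TimeoutError in payment.charge()"],
)
print(build_handoff_payload(ctx))
```

The key design point is that the observability side owns the telemetry and decides what context crosses the boundary; the coding agent receives a structured, self-describing payload rather than raw query access to production data.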
The convergence of observability and agentic AI represents a fundamental shift in how production telemetry is consumed—not just by human operators responding to incidents, but by AI systems making autonomous decisions throughout the software development lifecycle.
Watch the full TFiR interview with Shahar Azulay here.