Guest: Severin Neumann
Company: Causely
Show Name: 2026 Predictions
Topic: Observability
System complexity is growing faster than our ability to manage it. Observability data volumes are exploding, costs are spiraling, and AI is being deployed at scale without adequate guardrails. Severin Neumann, Head of Community at Causely, believes 2026 will be the year these trends collide in a major AI-related global incident—and it will serve as a crucial wake-up call for the industry.
The Complexity Crisis Accelerates
Severin’s first prediction centers on what he calls the continuation of a decades-long trend: system complexity will keep growing fast in 2026. “More services, more microservices—and during an incident, it’s harder than ever to pinpoint where the problem starts,” he explains. The challenge has evolved beyond identifying what looks broken. “It’s really about where the problem is. We need to find ways to locate it faster to reduce the impact on the business.”
This growing complexity creates a cascading problem. As organizations adopt microservices architectures and cloud-native technologies, the number of interdependencies multiplies exponentially. Traditional monitoring approaches that worked for monolithic applications simply cannot keep pace.
Why 50% Data Reduction Won’t Cut It
Severin’s second prediction challenges the current narrative around observability data management. He doesn’t expect to see a real breakthrough in reducing telemetry volume in 2026—and he hopes he’s wrong. “Everybody talks about how we have too much observability data and the bills are getting out of control,” he notes. “But at the same time, vendors say ‘We help you reduce your volume by 50% or 80%.’ If we take into consideration how complexity is growing and how the volume of data is growing, this is not making anything significant.”
The math is stark. As system complexity grows exponentially, one-time percentage cuts in data volume, even 50% or 80%, don't fundamentally change the trajectory; continued growth erases them within a year or two. "What we really need is an order-of-magnitude change: 90% less, 99% less, or even 99.9% less, to a point where people will feel the change," Severin argues.
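To make the order-of-magnitude point concrete, here is a toy calculation. The doubling growth rate is an illustrative assumption, not a figure from the episode: if telemetry volume doubles each year, a one-time percentage cut is only a temporary reprieve.

```python
import math

def years_until_cut_is_erased(reduction, annual_growth=2.0):
    """Years until volume returns to its pre-cut level, assuming a one-time
    `reduction` (0..1) followed by multiplication by `annual_growth` per year."""
    remaining = 1.0 - reduction
    return math.log(1.0 / remaining) / math.log(annual_growth)

print(round(years_until_cut_is_erased(0.50), 2))   # 1.0  year
print(round(years_until_cut_is_erased(0.80), 2))   # 2.32 years
print(round(years_until_cut_is_erased(0.999), 2))  # 9.97 years
```

At a doubling rate, a 50% cut buys one year and an 80% cut a little over two, while a 99.9% cut buys roughly a decade, which is the "people will feel the change" threshold Severin describes.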
The solution requires rethinking data architecture entirely. “We need to think much more about how we can keep most telemetry local, process it locally, and then only send back insights that we are collecting.” This shift from centralized data collection to distributed insight generation could lower costs and reduce noise—but Severin doesn’t see the industry making this transition in 2026.
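The shift Severin describes, keeping raw telemetry at the edge and shipping back only insights, can be sketched in a few lines. This is a hypothetical illustration; the function name, record fields, and anomaly threshold are made up for the example and do not reflect any vendor's API.

```python
# Hypothetical edge agent: keep raw samples local, emit a compact summary
# plus only the outliers, instead of forwarding every data point upstream.
from statistics import mean, pstdev

def summarize_locally(latencies_ms, anomaly_sigma=2.0):
    """Reduce a raw window of latency samples to one insight record."""
    mu, sigma = mean(latencies_ms), pstdev(latencies_ms)
    anomalies = [x for x in latencies_ms
                 if sigma and abs(x - mu) > anomaly_sigma * sigma]
    return {
        "count": len(latencies_ms),   # raw volume stays at the edge
        "mean_ms": round(mu, 1),
        "stddev_ms": round(sigma, 1),
        "anomalies_ms": anomalies,    # only the outliers leave the node
    }

window = [12, 11, 13, 12, 14, 12, 11, 250]  # one slow request
insight = summarize_locally(window)
# The backend receives four fields instead of the full sample stream.
```

The design choice is the one Severin argues for: the insight (count, distribution, outliers) crosses the network, while the bulk of the telemetry never does.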
LLMs Will Hit Their Operational Limits
For his third prediction, Severin takes a nuanced stance on AI. LLMs will stay very useful in 2026, but the industry will also discover their limits in operations and learn how to use them more appropriately. “LLMs are awesome and we use them all the time, everywhere,” he acknowledges. “But at the same time, we also see they’re not really good at finding causality. They’re really good at creating correlation. They can sound very confident and still tell you something that’s entirely wrong.”
The fundamental challenge is scale. "Data is exploding, complexity is exploding, and we cannot feed all of that into the LLMs and say, 'make sense out of that,'" Severin explains. "Are we really going to take all the telemetry that we're collecting and feed it into an LLM and say, 'figure out what the problem is?'" He doesn't believe LLMs can maintain a continuous model of a complex environment's state, which is a critical requirement for effective operations.
The AI Incident That’s Coming
Severin’s fourth and most dramatic prediction: 2026 will see a major AI-related incident. Not small failures where an AI-built app breaks, but “a global incident—a major vendor using AI internally, where something slips through and breaks down, like the major outages we saw last year from the big cloud providers and other big companies everybody relies on.”
This prediction isn’t rooted in fear-mongering but in probability. As more organizations integrate AI into critical systems without fully understanding its failure modes, the likelihood of a significant incident increases. “I think this is going to be a big wake-up call that shows what impact it can have,” Severin says. “Hopefully people will use this to think about how we can put guardrails in place and how we can do safer releases.”
Navigating the Challenges Ahead
When asked about the biggest challenges organizations will face, Severin points to the human element. “We need to navigate that as humans. We need to figure out what to focus on, what to ignore, what’s just AI slop, and when AI is leading us down the wrong path.”
This challenge extends beyond observability into every aspect of work and life. "AI allows a lot of companies to build things really quickly, so suddenly we need to ask many more questions around what is the company I should be buying from and what media is worth consuming." Cutting through the noise has become dramatically harder since AI's arrival, and the problem will only intensify.
Causely’s Approach: Causal Reasoning Over Correlation
Causely addresses these challenges by putting a causality layer on top of existing observability data. “We continuously look at causes and effects across your system, find symptoms, find the causes, and surface the emerging risks,” Severin explains. The platform pinpoints root causes and helps teams quickly resolve incidents.
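As a toy illustration of causality over correlation (not Causely's actual algorithm), consider a service dependency graph: a root-cause candidate is any service whose failure would explain every observed symptom, which favors a shared upstream dependency over the services that merely look broken. The topology and names here are invented for the example.

```python
DEPS = {  # service -> services it calls (hypothetical topology)
    "frontend": ["checkout", "search"],
    "checkout": ["payments", "db"],
    "search": ["db"],
    "payments": [],
    "db": [],
}

def reachable(service, deps=DEPS):
    """All services a problem at `service` could explain: the service
    itself plus everything that depends on it, transitively."""
    callers = {s: [c for c, ds in deps.items() if s in ds] for s in deps}
    seen, stack = set(), [service]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(callers[s])
    return seen

def candidate_root_causes(symptoms, deps=DEPS):
    """Services whose failure would explain every observed symptom."""
    return sorted(s for s in deps if set(symptoms) <= reachable(s, deps))

print(candidate_root_causes({"frontend", "checkout", "search"}))  # ['db']
```

With symptoms on `frontend`, `checkout`, and `search`, a correlation view flags three broken services, while the causal view points at the one shared dependency, `db`, that explains all of them.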
The company is already doing local distillation of data into insights, processing most telemetry locally—the exact shift Severin believes the industry needs. “We want to help people get the insights they have. We will use AI where it’s helping us, where it’s making things better for summarizing and providing shortcuts, but we want to stay grounded in evidence so that we and our customers can act with confidence.”
Actionable Advice for Enterprise Leaders
Severin’s advice for enterprise leaders preparing for 2026 returns to fundamentals: “People buy from people. Work with the people you’re working with. Look at the people that you’re buying from—are they giving you the right things, the right tools, the product you want to work with?” As more capabilities are handed off to AI, the human aspect of technology partnerships becomes more important, not less.
He also warns about vendors who simply slap AI onto existing problems without addressing underlying architectural issues. “Be cautious about people that just show up and say, ‘we used AI to fix this one problem’ when you have this gut feeling of, ‘I can build this on my own, but what I’m missing is the non-AI part.'”
The predictions Severin shares aren’t doomsday scenarios but realistic assessments of current trajectories. By acknowledging these challenges now—growing complexity, inadequate data reduction, LLM limitations, and AI incident risks—organizations can take steps to prepare rather than scramble to respond when the inevitable occurs.





