
Network Intelligence Gets Operationalized in 2026: Kentik CPO on AI Ops and Rising Data Center Costs


Guest: Mav Turner (LinkedIn)
Company: Kentik
Show Name: 2026 Predictions
Topic: Observability

Network intelligence is moving from experimental projects to operational reality in 2026, and the shift will fundamentally change how enterprises manage infrastructure complexity, costs, and reliability.



Mav Turner, Chief Product Officer at Kentik, brings a unique perspective to these predictions. Kentik works with some of the most advanced technology companies and internet providers in the industry, giving Turner direct visibility into where network operations are heading. “We’re fortunate enough to work with some of the most advanced and leading-edge technology companies, internet providers, and technologists in the industry,” Turner explains. “So we get to have all these conversations, understand the problems, and see where traction is coming from.”

AI-Powered Network Operations Reach Maturity

The first major shift Turner predicts is the operationalization of network intelligence powered by AI. While enterprises have been experimenting with AI for network operations, 2026 is when these initiatives finally deliver measurable value at scale.

“A lot of teams had initiatives—the board said, ‘Hey, get more productive with AI.’ And everyone’s like, ‘What does that mean?’” Turner says. “We’ve been talking with a lot of customers, and I think this year is the year that we actually really start to hit that inflection point on being able to reduce the number of outages by leveraging AI, by reducing the number of escalations that have to go to tier two or tier three.”

The key is full lifecycle integration across customer environments. This means pulling data not just from network monitoring systems, but from all infrastructure systems to empower frontline teams—or even customers themselves—to understand and resolve issues without escalation. “That front line, or even the customer themselves, can access that data and be empowered by understanding what’s happening,” Turner explains.

For network intelligence specifically, the stakes are high. “Obviously, that’s key, because if the network’s not working, nothing’s going to work,” Turner notes. Enterprises that successfully implement these AI-powered strategies will solidify their approaches and actually extract value from integrations, whether with third-party vendors or homegrown systems.

Cloud Networking Finally Grows Up

Turner’s second prediction addresses a longstanding gap in cloud infrastructure: the maturation of cloud networking and the convergence of traditional network expertise with cloud operations.

When cloud computing first emerged, many teams believed networking complexity would simply disappear. “Everybody was like, ‘Great, now we don’t have to worry about networking anymore,’” Turner recalls. “And obviously we all know that’s not true, but a lot of networks, systems, and applications were designed by people who didn’t understand networking, or who weren’t really worried about it.”

Now, the complexity of hyperscaler networking stacks has reached a point where specialized expertise is essential. “It requires expertise,” Turner says. “Traditional network teams are learning cloud. The complexity of cloud is increasing. Application developers are understanding the value and need for having that expertise.”

This convergence is accelerated by recent cloud outages that exposed weaknesses in network design. Turner points to customers who invested in proper redundancy and weren’t impacted by AWS outages, investments whose budgets had to be defended to CFOs but that ultimately paid off.

The result is a hybrid challenge requiring intentional strategy. “There’s a lot of on-prem data and systems that are connecting across multi-cloud,” Turner notes. “Between those issues and the maturity and convergence of these other work streams, I think we’re really going to see a lot of growing up in that industry this year.”

Infrastructure Spend Growth Masks Rising Costs

Turner’s third prediction carries both opportunity and risk: massive infrastructure spending will continue, but costs for standard workloads will rise significantly as data centers prioritize AI capacity.

“We’re going to create more capacity. I think we’re going to continue to see more AI—interesting, novel AI approaches,” Turner says. “The industry is starting to understand them more. We’re starting to get more value out of them.”

But this growth creates scarcity. “The cost for workloads—for hosting workloads—is going up,” Turner warns. “That’s the standard supply-and-demand scarcity situation going on here, and those data center hosting costs for standard workloads are going to continue to rise.”

The challenge is that data center expansion takes time, and prioritization favors AI workloads over traditional applications. This could trigger fundamental platform architecture reevaluations. Where lift-and-shift cloud migrations previously led to repatriation decisions based on cost, rising on-premises data center costs may force enterprises to modernize workloads to justify cloud economics.

“Companies will need to keep an eye on any renewals they have, any expansion they’re doing in these data centers, and really have larger-than-normal increases planned in the budget, because that rack capacity is getting scarce,” Turner advises.

What Enterprise Leaders Should Do Now

Turner offers specific guidance for CTOs and infrastructure leaders navigating these shifts.

First, address data center costs immediately. “If you haven’t already planned for this year and expected pretty drastic price increases—or locked in multi-year contracts to get you far enough out—then you might be a little bit at risk,” Turner warns.

Second, revisit cloud networking strategy and team responsibilities. “Do we really need to be multi-cloud? Do we benefit from being single cloud?” Turner asks. These decisions require explicit risk alignment with cost tradeoffs, and they need intentional planning rather than reactive building.

Third, develop a clear AI agent strategy. “Everybody in your organization is using AI. They’re using it in different places,” Turner notes. “How are you scaling this out in a way that reduces load, allows people to do higher-value work, and allows that full system integration to occur?” Enterprise leaders who prioritize progress here will see compounding returns.

For Kentik, the focus is on bringing intelligence proactively to customers rather than waiting for them to ask questions. “We want to push it into their systems. We want to proactively say, ‘You’re going to have a problem if you don’t do this,’” Turner explains. The goal is specific recommendations beyond basic capacity planning—sophisticated workload analysis that helps customers optimize networks before issues arise.

Turner remains optimistic about the opportunities these changes create. “The opportunity is expanding pretty drastically right now,” he says. “For people who have that desire to learn, desire to grow, who want to change—my main advice is just do it. Just try it. Just experiment with it. Get yourself over that mental barrier of ‘I don’t know,’ and just try it—you’ll be surprised.”
