Guest: Kolton Andrus (LinkedIn)
Company: Gremlin
Show Name: 2026 Predictions
The artificial intelligence gold rush is about to hit a critical inflection point. After two years of unprecedented hype and investment, 2026 will force a reckoning between AI’s promises and its actual capabilities—and the companies that survive will be those that solve the data control dilemma while maintaining engineering fundamentals.
Kolton Andrus, CEO of Gremlin, has a clear-eyed view of what’s coming. His company focuses on building reliable distributed systems and preventing outages before they happen—a mission that’s becoming more critical as organizations rush to implement AI without considering the reliability implications.
The Data Control Battleground
“Data is becoming important again,” Andrus explains. “Who owns the data? Where is the data? Who has control of the data? These are going to be pretty hot topics.” This isn’t just a philosophical question—it’s a practical business challenge that will define AI implementation strategies in 2026.
Large corporations face a fundamental tension: they need their internal tribal knowledge and proprietary data to train effective AI models, but they can’t risk that data leaving their walls or becoming public.
“If I’m a large corporation, all of the tribal data and knowledge we’ve accumulated internally is the data we want to feed into LLMs and AI so that we have accurate, actionable information. But I don’t necessarily want that data leaving our walls or becoming public,” Andrus says.
The solution lies in higher quality data sources. Training models on internet-scraped data won’t meet enterprise standards. Companies will need to find ways to leverage their proprietary data while maintaining security and control.
The Trough of Disillusionment
Andrus predicts a painful transition ahead. “I think we’re going to go through a bit of a trough of disillusionment with AI,” he says. “We’ve been sold some pretty high expectations over the last two years, and we’ve definitely made forward progress. But I think the promise we’ve been given is going to be hard to fulfill.”
The core problem? Eighty percent accuracy isn’t good enough for most business applications. “We’ve automated a lot of those problems in infrastructure, in code, and in other places. But as businesses and as a society, we really can’t operate at 80% accuracy. We need something that is accurate, trustworthy, and credible,” Andrus explains.
As AI companies move beyond their initial honeymoon period, investors will start demanding real results. “A couple of years in, people are just going to start demanding results, and that’s going to force a lot of these companies to make hard decisions—whether this is good enough or it isn’t,” he notes. Many companies won’t make it through this shakeout.
From Vibe Coding to Vibe Failures
The rush to implement AI has created what Andrus calls “vibe coding”—rapid development driven by fear of missing out rather than sound engineering principles. “A lot of companies and teams have a mandate to go figure it out. There’s a lot of FOMO—fear of being left behind—where people are saying, ‘We have to go all in on this,’” he observes.
While vibe coding works for prototyping and quick experimentation, it won’t meet enterprise standards. “Vibe coding is going to lead to a lot of vibe failures and vibe outages that will force people to go back and rethink some fundamentals and make things higher quality, more resilient, and more reliable,” Andrus warns.
Don’t Abandon the Fundamentals
Andrus’s advice for enterprises is refreshingly pragmatic: focus on the basics. “Don’t ignore the fundamentals. Don’t forget the basics. Hire good teams, build quality software, do good testing, have a clear vision, and articulate it well,” he emphasizes.
The key is finding the right balance. “Leverage AI, but don’t let it do the thinking for you,” Andrus says. “We should use it for what it’s good at, but we need to keep our own thinking sharp—especially around identifying the most important problems to solve for our customers and our teams.”
Companies should allocate time for AI experimentation—perhaps 10–20% of capacity—while maintaining a strong focus on core engineering principles. “We shouldn’t abandon the fundamentals along the way,” he cautions.
Making Quality Easy
For Gremlin, 2026 means doubling down on its core mission. “We like to make it easy to do the right thing,” Andrus says. The company plans to leverage AI for better recommendations, automated log searching, metric parsing, and context-aware guidance—helping customers get the data they need to make decisions and resolve problems quickly.
The goal is to enable velocity without sacrificing reliability. “We want to make it easy to do the fundamentals and cover your bases without losing that velocity,” Andrus explains. “That’s the ideal, utopian world we want to live in—where we’re moving fast, using the tools we have to automate and quickly accomplish the tasks we need, while still focusing on the fundamentals and building great software to solve customer problems.”
As AI enters its reality-check phase, the winners will be those who maintain engineering discipline while thoughtfully integrating new capabilities—not those who chase hype at the expense of reliability.