AI has made engineers 3x more productive at writing code, but that productivity creates a cascading operational burden: 3x more applications being built means 3x more releases hitting production, which means 3x more work for platform engineering and SRE teams. Add AI inferencing workloads—custom models and fine-tuned models deployed to GPU-attached Kubernetes clusters—and the pressure shift is undeniable: platform teams need automation at scale, or they drown in deployment frequency.
The Guest: Hong Wang, Co-founder and CEO at Akuity
The Bottom Line
- AI creates both challenge and opportunity for platform teams: 3x more code means 3x more releases and operational burden, but AI can also autonomously distill operational data to identify root causes and reduce SRE toil
***
Speaking with TFiR, Hong Wang of Akuity defined the current platform engineering landscape and explained how AI is reshaping both workload types and team productivity dynamics.
What Are the Platform Engineering Trends for 2026?
Wang identified two primary trends: the rise of AI inferencing workloads requiring GPU-attached infrastructure, and the productivity paradox where AI-enabled engineers generate 3x more code—creating 3x more operational work for platform teams.
Hong Wang: “Number one, platform teams are thinking about the fact that a lot of our customers are running inferencing loads. They want to deploy their custom model or fine-tuned model. It’s a unique new workload that we have to handle and deploy to Kubernetes clusters with GPUs attached. That’s common, that’s growing, and we see more and more demand for that.”
AI inferencing workloads differ from traditional microservices deployment patterns. Custom models and fine-tuned models require GPU resources, specialized cluster configurations, and different scaling behaviors. Platform teams are adapting delivery pipelines to accommodate both traditional application deployments and AI model deployments simultaneously.
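To make that difference concrete, here is a minimal sketch of what a GPU-attached inferencing workload looks like at the Kubernetes manifest level (the workload name, image, and GPU count are hypothetical, and this is illustrative rather than any specific customer configuration): the pod requests the `nvidia.com/gpu` extended resource exposed by the NVIDIA device plugin, which is what steers the scheduler toward GPU-attached nodes.

```python
import json

def gpu_inference_manifest(name: str, image: str, gpus: int) -> dict:
    """Build a minimal Kubernetes Deployment manifest for a model-serving
    pod that requests NVIDIA GPUs via the standard extended resource."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": "model-server",
                        "image": image,
                        # GPUs are requested as an extended resource, so the
                        # scheduler only places the pod on GPU-attached nodes.
                        "resources": {"limits": {"nvidia.com/gpu": gpus}},
                    }],
                    # Tolerate the taint commonly applied to GPU node pools.
                    "tolerations": [{
                        "key": "nvidia.com/gpu",
                        "operator": "Exists",
                        "effect": "NoSchedule",
                    }],
                },
            },
        },
    }

manifest = gpu_inference_manifest(
    "sentiment-model", "registry.example.com/sentiment:v2", gpus=1
)
print(json.dumps(manifest, indent=2))
```

A stateless microservice manifest would omit the GPU limit and toleration entirely; the delivery pipeline has to generate and validate both shapes side by side.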
Broader Context: The AI Productivity Paradox
Wang explained that AI coding assistants have made individual engineers significantly more productive—but that productivity translates directly into increased operational burden for platform engineering and SRE teams managing production infrastructure.
Hong Wang: “AI is making every engineer three times more efficient. Originally, you have five engineers who can fix five issues a week or 20 features a week. Right now, AI is making every engineer three times more efficient. So we see more applications being built, more changes being released to the cluster, to the runtime. We see a lot of pressure shifting—there’s more demand for automation, more demand for deployment, and they’re deploying things to production more frequently. That’s why we see the growth.”
This productivity surge creates a compounding challenge: not only are there more applications to manage, but each application is being updated more frequently. The operational burden on platform teams has increased proportionally—more releases mean more potential incidents, more rollbacks, more troubleshooting, and more manual intervention unless automation scales accordingly.
Wang framed AI’s impact as dual-natured: it creates both challenges and opportunities for platform teams.
Hong Wang: “AI is definitely adding more burden and challenges to SRE teams and platform teams because they have more work to do—they see more changes, more releases, more changes to production. On the other side, I also see the opportunity. AI can really play a substantial role to make their life even happier, even better, because AI can help you efficiently look at a vast number of data, a vast number of signals, trying to distill what is really important, what matters, what is the root cause.”
The opportunity side of the equation: AI can autonomously analyze operational telemetry, logs, events, and metrics to identify root causes faster than human operators manually triaging incidents. This shifts SRE responsibilities from reactive troubleshooting to proactive guardrail definition—defining the symptom-solution patterns that AI uses to autonomously remediate issues.
Wang shared his own team’s experience with AI adoption. Akuity employs 25 engineers, and AI has become a daily part of their workflow.
Hong Wang: “Every time I sit down with my engineering team nowadays, every day is talking about AI. They’re so impressed—’I thought AI couldn’t do this, but actually it’s 75% there. Sure, there’s some small gap, but it’s working.’ It makes their life way better and way easier now. They feel they’re never being blocked by needing to code something up. It’s easier for them to run through all ideas, get a prototype, get a POC, and eventually get to the finish line. It’s getting much smoother now. It’s game-changing for everyone.”
The “Superman feeling” Wang described in the interview captures the psychological shift: engineers are no longer bottlenecked by implementation details. Ideas move from concept to prototype to production faster, with fewer friction points. This acceleration drives business value—more features shipped, more bugs fixed, more customer value delivered—but it also requires platform teams to evolve their automation strategies to keep pace.
Wang’s advice for platform teams: embrace the dual nature of AI with open-minded experimentation. Prototype AI-powered workflows, test autonomous remediation for well-understood issues, and iterate on guardrails. The teams that adapt fastest will turn the productivity paradox into a competitive advantage.
Watch the full TFiR interview with Hong Wang here.