Fixing Java’s Warm-Up Problem in the Cloud | John Ceccarelli, Azul

Guest: John Ceccarelli (LinkedIn)
Company: Azul 
Show: Java Reloaded
Topic: Cloud Native

Cloud-native promises elasticity: spin up new capacity instantly, pay only for what you use, and scale smoothly under load. But for enterprises running Java, the reality has been more complicated. JVM warm-up times, long initialization phases, and slow first transactions have forced teams to overprovision resources—driving up costs and limiting the agility cloud was supposed to deliver.

In this clip, I sat down with John Ceccarelli, VP of Product Management at Azul, to unpack how their Optimizer Hub tackles this decades-old problem.

The Warm-Up Bottleneck

“The major problem with Java in cloud workloads is that the behavior at the beginning of a Java server’s life is problematic,” Ceccarelli explained. A new JVM instance not only starts slowly but also delivers poor performance on its early transactions while it figures out what to optimize. In traditional environments, always-on servers masked the issue. But in cloud-native setups—where Kubernetes reschedules workloads frequently or serverless platforms spin up a JVM per request—the impact is severe. Enterprises end up keeping resources always-on just to avoid a poor user experience, wasting both money and energy.
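To make the effect concrete, here is a minimal, self-contained sketch of the warm-up curve on a stock HotSpot JDK. It is not tied to any Azul API; the class name and workload are invented for illustration. The same batch of work is timed repeatedly: the first rounds run while the JVM is still interpreting and profiling, and are typically several times slower than the steady state reached once the JIT has compiled the hot path.

```java
// WarmupDemo.java -- illustrative only; timings vary by machine and JVM.
public class WarmupDemo {

    // A deliberately hot method: string conversion plus arithmetic.
    static long batch(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += Long.parseLong(Integer.toString(i)) % 7;
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int round = 1; round <= 10; round++) {
            long start = System.nanoTime();
            long checksum = batch(200_000);
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.printf("round %2d: %6d us (checksum %d)%n", round, micros, checksum);
        }
        // Early rounds execute in the interpreter while the JIT gathers profiles;
        // later rounds run compiled code and are markedly faster -- the
        // "poor performance for early transactions" a fresh JVM exhibits.
    }
}
```

In a long-lived server this cost is paid once and forgotten; in an autoscaled or serverless fleet it is paid on every new instance, which is exactly the behavior Ceccarelli is describing.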

Taking Optimization Off the Server

Optimizer Hub changes the equation by offloading just-in-time (JIT) compilation and optimization to an external service. “The JVM wakes up and says, ‘I’m running this app’. And it asks Optimizer Hub, ‘Help me out here,’” Ceccarelli said. Optimizer Hub performs the heavy lifting externally, caching optimizations across workloads and delivering tuned code back instantly.

This means new JVMs no longer burn CPU cycles on warm-up, and users don’t suffer degraded performance. Enterprises can finally use auto-scaling and elasticity with confidence.

10,000 JVMs, No Code Changes

Skeptics often ask whether this works at scale. Ceccarelli was quick to emphasize: “We have large customers running their entire estates on this. One Optimizer Hub instance can handle 10,000 JVMs.” And it works across Java 8 and newer, requiring no code changes or recompilation. “You switch the JVM, hook it up to Optimizer Hub, and off you go,” he said.

Supporting Mixed Workloads

What about diverse application patterns? Optimizer Hub handles that too. Each app gets its own optimization context, ensuring that shared methods are tuned appropriately per workload. “Both apps are getting the absolutely fastest version possible for the specific way they use that method,” Ceccarelli explained.
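As a minimal sketch of why that separation matters (my illustration, not Azul's code; all class names here are invented), consider one method whose bytecode is identical in two applications but whose ideal compiled form is not. Run the class below once as `java SharedMethodDemo appA` and once with any other argument: in the first JVM the call site inside `total` only ever sees one implementation, so the JIT can speculatively devirtualize and inline it, while in the second it sees three and needs a different strategy. Caching one compiled form for both would shortchange one of them, which is what per-application optimization contexts avoid.

```java
// SharedMethodDemo.java -- illustrative only (requires Java 16+ for records).
interface Shape { double area(); }
record Circle(double r) implements Shape { public double area() { return Math.PI * r * r; } }
record Square(double s) implements Shape { public double area() { return s * s; } }
record Triangle(double b, double h) implements Shape { public double area() { return 0.5 * b * h; } }

public class SharedMethodDemo {

    // The "shared method": identical bytecode in both workloads.
    static double total(Shape[] shapes) {
        double sum = 0;
        for (Shape s : shapes) {
            sum += s.area(); // the JIT optimizes this call based on the profile it observes
        }
        return sum;
    }

    public static void main(String[] args) {
        boolean appA = args.length > 0 && args[0].equals("appA");
        Shape[] work = new Shape[100_000];
        for (int i = 0; i < work.length; i++) {
            if (appA) {
                // "App A" only ever passes Circles: a monomorphic call site for area().
                work[i] = new Circle(i % 10 + 1);
            } else {
                // "App B" mixes three implementations: a megamorphic call site.
                work[i] = switch (i % 3) {
                    case 0 -> new Circle(i % 10 + 1);
                    case 1 -> new Square(i % 10 + 1);
                    default -> new Triangle(i % 10 + 1, 2);
                };
            }
        }
        double sink = 0;
        for (int i = 0; i < 200; i++) {
            sink += total(work); // give the JIT time to compile for this run's profile
        }
        System.out.println(sink); // keep the work from being optimized away
    }
}
```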

A Cloud-Ready JVM

By removing warm-up bottlenecks, Optimizer Hub makes Java truly cloud-ready. Enterprises can cut waste, improve sustainability, and deliver better user experience—all while running familiar codebases. For developers and platform teams, the takeaway is clear: scaling Java no longer means choosing between elasticity and performance.
