Cloud Native

The Java Migration Paradox: Why 40% of Enterprise Apps Remain Trapped On-Premises


The enterprise Java landscape faces a stark reality: despite the cloud’s undeniable benefits, more than 40% of Java applications remain stubbornly anchored to on-premises infrastructure. It’s not for lack of trying—it’s because traditional cloud migration approaches have created barriers that many organizations simply cannot overcome.

George Gould, Senior Vice President of Corporate Development and Partner Alliances at Azul, recently shed light on this phenomenon during our discussion about the company’s partnership with Payara. The collaboration promises something that sounds almost too good to be true: codeless Java cloud migration that takes 15 seconds and requires zero refactoring.



The Real Cost of Java Migration Complexity

“For an enterprise to move a Java app, particularly a mission-critical, revenue-generating application, you often face complexities with that migration because part of the migration usually needs to include modernization,” Gould explained. This modernization requirement creates a cascade of challenges that have kept enterprise Java applications effectively imprisoned.

The most immediate hurdle is talent. Legacy applications that haven’t been actively developed require finding developers willing to dive back into potentially decades-old code. For many enterprises, this means either expensive consulting engagements or hoping existing teams can context-switch effectively—both costly propositions.

Security presents another layer of complexity. As Gould noted, enterprises must consider “not just security vulnerabilities within the application, but the containers themselves.” This dual security concern transforms what should be a straightforward lift-and-shift into a comprehensive security audit and hardening exercise.

Perhaps most importantly, there’s the risk factor. “The bigger the app, the more complicated the app, the greater the risk involved in whether they’re going to be successful,” Gould observed. Success isn’t just about getting applications running in the cloud—it’s about capturing the promised ROI through elasticity and cost savings.

A Six-Year Partnership Bears New Fruit

Azul’s relationship with Payara isn’t new—it spans six years and was originally built around helping enterprises migrate away from expensive commercial application servers like WebLogic and IBM WebSphere. “More than 80% of their deployments leverage our Core runtime,” Gould revealed, highlighting the partnership’s established success.

The new dimension combines Payara Qube—a developer productivity tool that abstracts Kubernetes complexity—with Azul Platform Prime, the company’s high-performance Java runtime built around an optimizing JIT compiler. The result is what Gould described as a “turbocharger” for Java applications.

“What Qube does is, through one button, take the JAR and WAR file and in about 15 seconds, deploy that and give you back the URL so you can access those services and that application through a web browser,” Gould explained. For Java developers unfamiliar with Kubernetes YAML configurations and orchestration complexities, this represents a significant productivity leap.
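To make that abstraction concrete, the sketch below shows roughly what a Java team would otherwise have to script against the Kubernetes API just to get one containerized WAR serving traffic: a Deployment plus a Service, before even considering ingress, configuration, or secrets. It is a minimal illustration using the Fabric8 Kubernetes Java client; the image name, labels, and namespace are hypothetical, the exact client calls vary by client version, and it does not represent Qube’s internals.

```java
import io.fabric8.kubernetes.api.model.IntOrString;
import io.fabric8.kubernetes.api.model.Service;
import io.fabric8.kubernetes.api.model.ServiceBuilder;
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class ManualDeploySketch {
    public static void main(String[] args) {
        // Hypothetical container image built from the application's WAR file.
        String image = "registry.example.com/orders-app:1.0";

        // Deployment: replica count, label selector, container spec -- normally a YAML manifest per app.
        Deployment deployment = new DeploymentBuilder()
            .withNewMetadata().withName("orders-app").endMetadata()
            .withNewSpec()
                .withReplicas(2)
                .withNewSelector().addToMatchLabels("app", "orders-app").endSelector()
                .withNewTemplate()
                    .withNewMetadata().addToLabels("app", "orders-app").endMetadata()
                    .withNewSpec()
                        .addNewContainer()
                            .withName("orders-app")
                            .withImage(image)
                            .addNewPort().withContainerPort(8080).endPort()
                        .endContainer()
                    .endSpec()
                .endTemplate()
            .endSpec()
            .build();

        // Service: exposes the pods so the application is reachable at a stable address.
        Service service = new ServiceBuilder()
            .withNewMetadata().withName("orders-app").endMetadata()
            .withNewSpec()
                .addToSelector("app", "orders-app")
                .addNewPort().withPort(80).withTargetPort(new IntOrString(8080)).endPort()
                .withType("LoadBalancer")
            .endSpec()
            .build();

        // Apply both objects to the cluster (Fabric8 6.x style; older clients use createOrReplace).
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            client.apps().deployments().inNamespace("demo").resource(deployment).create();
            client.services().inNamespace("demo").resource(service).create();
        }
    }
}
```

Even this stripped-down version runs to dozens of lines per application and per environment, which is the work Gould says Qube collapses into a single button press and a returned URL.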

The Performance and Cost Equation

The technical elegance of this approach becomes clear when examining the performance implications. Platform Prime’s JIT compiler can deliver measurable improvements in application KPIs—particularly valuable for e-commerce platforms, trading systems, and ad servers where performance directly correlates to revenue.

“I jokingly say our JVMs go to 11—it’s a turbocharger you can apply,” Gould said, referencing the classic Spinal Tap scene. But the benefits aren’t just about raw performance. The same technology can optimize for cost efficiency: “We can give you the same KPIs you had before with a 20% or 25% reduction in infrastructure costs. Maybe you ran on 10,000 vCPUs before—we can reduce that to 8,000 or less.”

This dual capability—performance enhancement or cost optimization—addresses two primary enterprise migration motivations without requiring architectural changes or code modifications.

Security Through Strategic Partnerships

Recognizing that containerization introduces new security considerations, Azul has expanded its partnership ecosystem. Recent agreements with Chainguard and RapidFort—both hardened container specialists—provide additional security layers for enterprises concerned about container vulnerabilities.

“When enterprises move applications to the cloud, they need to think about not just the steps of possible modernization and understanding how to leverage the benefits of the cloud, but how to minimize security vulnerabilities—not just in the app, but in the containers themselves,” Gould emphasized.

This multi-vendor approach reflects the reality of modern enterprise security requirements, where continuous monitoring and threat mitigation have become operational necessities rather than one-time deployment concerns.

Compatibility Without Compromise

The “no code changes” promise rests on a foundation of rigorous compatibility testing. Every Azul JVM undergoes Technology Compatibility Kit (TCK) testing—a suite of over 110,000 unit tests that ensures adherence to Java specification standards.

“That has always been the mantra of Java: ‘Write once, run anywhere,’” Gould noted. “That compatibility is achieved through the JVM itself.” This testing regime provides the technical assurance that enterprises require when migrating mission-critical applications without the safety net of code modifications.

The approach also acknowledges that not every application requires maximum performance. Developers can choose between Platform Core for well-curated, secure JDK builds, or Platform Prime when applications demand enhanced runtime performance—all without architectural changes.

Java’s Expanding Role in Modern Architectures

Despite concerns that newer technologies might diminish Java’s relevance, Gould sees the opposite trend, particularly with AI workloads. “We believe that AI is only going to increase the use of Java because Java is a great tool and language to use when you’re building high-scale, high-performance applications,” he explained.

Rather than replacement, Gould envisions integration: “These existing applications—whether it’s an e-commerce application—will need to wrap AI services in a set of Java APIs and bring them into the existing applications.”

This perspective suggests that Java’s role in enterprise architecture will expand rather than contract, with existing applications serving as integration points for AI capabilities while maintaining their core business logic and data processing functions.
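One way to picture that integration pattern—purely as a hedged sketch, since Gould did not describe a specific implementation—is an existing Java application hiding a generative model behind an ordinary class, so callers never touch the AI service directly. The class name, endpoint, and JSON payload below are hypothetical, and the example uses only the JDK’s built-in java.net.http.HttpClient.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/** Hypothetical Java API wrapping an external AI service for an existing e-commerce app. */
public final class ProductDescriptionService {

    private final HttpClient http = HttpClient.newHttpClient();
    private final URI endpoint; // e.g. an internal model-serving gateway; address is illustrative

    public ProductDescriptionService(URI endpoint) {
        this.endpoint = endpoint;
    }

    /** Existing callers see a plain Java method; the AI call is an implementation detail. */
    public String describe(String productName) {
        // Request and response shapes are placeholders, not any real provider's API.
        String payload = "{\"prompt\":\"Write a short description of "
                + productName.replace("\"", "'") + "\"}";
        HttpRequest request = HttpRequest.newBuilder(endpoint)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        try {
            HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
            return response.body(); // downstream code treats this like any other service result
        } catch (Exception e) {
            throw new IllegalStateException("AI description service unavailable", e);
        }
    }
}
```

The surrounding business logic and data handling stay exactly where they are; only the thin wrapper is new, which is the integration-rather-than-replacement relationship Gould describes.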

Looking Ahead: Continuous Innovation

Azul’s roadmap focuses on three key areas: developer productivity, cost optimization, and continuous security. The company’s Intelligence Cloud product already provides real-time vulnerability detection and code inventory analysis, but Gould hinted at expansions into technical debt assessment and application quality metrics.

“Are there ways that we can provide additional signals to containers and Kubernetes orchestration layers that make running Java applications in the cloud even more efficient, particularly when you’re talking about startup speeds and raw performance?” Gould asked rhetorically, suggesting that infrastructure optimization remains a primary focus.

The partnership with Payara represents more than a technical integration—it’s an attempt to remove the barriers that have kept enterprise Java applications trapped in on-premises environments. For DevOps teams managing legacy applications and executives evaluating cloud migration strategies, this approach offers a compelling alternative to the traditional rewrite-and-migrate paradigm.

As container startup times and application performance continue to drive cloud adoption decisions, solutions that deliver immediate benefits without architectural disruption may finally unlock that stubborn 40% of Java applications still waiting for their cloud migration moment.


TRANSCRIPT

Swapnil Bhartiya: As you all know, Java remains one of the most critical building blocks of enterprise applications, but migrating legacy Java workloads to the cloud can be complex, risky, and resource-intensive. What if you could skip the code rewrites—which is the most challenging part—and move faster without breaking anything? Today, we are diving into a partnership that promises to make cloud-native Java adoption far simpler for enterprise teams. Azul has teamed up with Payara to combine Azul Platform Prime with Payara Qube, enabling codeless lift-and-shift migrations without any refactoring. Today we have with us George Gould, Senior Vice President of Corporate Development and Partner Alliances at Azul, to talk about what makes this combination powerful, how it helps with performance and cost, and what’s next for Java in modern cloud environments. George, it’s great to have you on the show.

George Gould (1:10):  Great to be here. Thank you for including me.

Swapnil Bhartiya (1:12): Let’s start with the core offering. What makes the combination of Payara Qube and Azul Platform Prime so effective for codeless lift-and-shift Java migrations?

George Gould (1:23):  Most enterprises today, when they look at the Java apps they have on-premises—and the numbers we’re hearing now show that more than 40% of those Java apps are locked on-premises—have good reasons for this. For an enterprise to move a Java app, particularly a mission-critical, revenue-generating application, you often face complexities with that migration because part of the migration usually needs to include modernization. This usually means that if it’s a legacy application that is not being actively developed, you’ve got to find developers to go back and start retooling that application, which can be difficult. The other challenge is that these are important applications to the enterprise. Security is critical, and as enterprises move on-premises apps to the cloud, they have to consider not just security vulnerabilities within the application, but the containers themselves. The last piece is the risk associated with that migration. The bigger the app, the more complicated the app, the greater the risk involved in whether they’re going to be successful. Success can be defined by a number of criteria. One can be: did the ROI work? The reason they moved to the cloud was to leverage the benefits—maybe around elasticity, maybe around cost savings. Were they able to capture that? If not, that was a failed effort and will probably make it challenging for enterprises to look at the other applications they have.

Swapnil Bhartiya (3:07):  Can you talk about whether this is the first time you are working with Payara, or do you have an existing relationship with them?

George Gould (3:14):  Very good point. We’ve actually had a six-year history with Payara. Payara’s heritage has been around replatforming of J2EE applications. This is for enterprises that want to move off WebLogic Server or IBM WebSphere. That’s an important step to take, particularly because those licensing and support costs can be very expensive. When they did that, those customers were asking Payara to also provide a well-supported, well-curated build of OpenJDK to run those applications. Six years ago, we entered into an OEM partnership with Payara so they could provide those customers with our Platform Core product, which provides the most secure, well-curated builds of OpenJDK. That was the start of this partnership, and it has been quite successful. More than 80% of their deployments leverage our Core runtime.

Swapnil Bhartiya (4:25):  Can you quickly talk about the core offering of this combination of Payara Qube and Azul Platform Prime?

George Gould (4:34): The best way to think about Qube is as a developer productivity tool specifically for Java developers that abstracts out the complexity of taking a Java application—a JAR file, a WAR file—and moving that all the way to the cloud and deploying that in a Kubernetes environment. The first thing you need to do is containerize that application, and Qube helps automate that. It’s also important to point out that Azul just announced two partnerships over the last 30 days: one with RapidFort and the other with Chainguard. Both are hardened container vendors that fit into this strategy. When enterprises move applications to the cloud, they need to think about not just the steps of possible modernization and understanding how to leverage the benefits of the cloud, but how to minimize security vulnerabilities—not just in the app, but in the containers themselves. This is actually a multi-pronged partner approach. Qube’s first step is to containerize your application, which can be based on what’s available on Docker Hub, or if you have relationships with companies like Chainguard or RapidFort, you can leverage those hardened containers as well. Then you need to move that into a Kubernetes environment. For people familiar with that—particularly when it’s a complicated app, maybe a large J2EE app with a set of services that run around it—that can be complicated. You have a bunch of YAML manifest files that you have to configure for each app and service. You’ve got to configure and start up each service, each deployment, maybe pods that are necessary to spin that up so you can start testing that application. That can be a complicated process, and quite honestly, if you’re not familiar with Kubernetes, that’s a lot to ask of Java developers. What Qube does is, through one button, take the JAR and WAR file and in about 15 seconds, deploy that and give you back the URL so you can access those services and that application through a web browser. That’s powerful. The beauty of the partnership is that during containerization, Qube now provides the end user the ability to not only pair the application with Platform Core—the best, most secure, supported JDK from Azul—but you can also add Platform Prime, which is our high-performance JIT for the same JVM.

What’s really interesting about that is it can provide top-line benefits to your application. Maybe you’re an e-commerce application, maybe you’re a trading application, maybe you’re an ad server—you care about performance, speed, throughput. Zing is demonstrably a better JVM that can enhance the KPIs for that application. Whether your enterprise has a growth initiative and you, as a line-of-business manager, need to get more out of that application—the most important thing is to grow the top-line benefit, the revenue you can capture from that application, and to do it efficiently. Prime can also be used to address situations where you’re moving from the enterprise to the cloud, but you’re more focused on not improving KPIs but reducing the infrastructure cost to support that application. I jokingly say our JVMs go to 11—it’s a turbocharger you can apply. You can also tune the JVM for efficiency. We can give you the same KPIs you had before with a 20% or 25% reduction in infrastructure costs. Maybe you ran on 10,000 vCPUs before—we can reduce that to 8,000 or less. That’s powerful. So now, with one-button deployment through Payara Qube, you can build, containerize, and deploy into Kubernetes. When you deploy in the Kubernetes environment, you can instantly see application benefits—whether that’s performance, speed, top-line benefits, or bottom-line cost containment. And that’s powerful because you didn’t have to change a line of code to achieve that. That’s about reducing the risk of migration to the cloud.

Swapnil Bhartiya (9:26): Thank you for explaining it in detail. Today we live in a multi-cloud, hybrid cloud world, which also means complexity is multiplied. How does this partnership help simplify cloud-ready Java adoption in those kinds of environments? We can also talk about Edge here.

George Gould (9:47): The capability of Qube is to understand those different environments and the subtleties it takes to abstract out all the complexities of configuring Java applications for those different deployment topologies. The subtle differences between different hyperscale clouds, the differences between private and public—all of those are critical. It could be security requirements, it could be performance requirements. That’s all abstracted out into Qube, so the Qube capabilities and productivity can be delivered in any of those environments, whether it’s a private, hybrid, or multi-cloud situation.

Swapnil Bhartiya (10:29): When we talk about this partnership, we mostly discuss performance and cost, because these are major factors. But what are the other benefits that this collaboration brings? We can talk about observability, compliance, and other factors which may not directly contribute to performance improvement or cost reduction but play a very critical role in business operations and business continuity.

George Gould (10:56): Cost containment is certainly paramount when we talk to our customers about how they define success of any migration—it is about cost. The next set of issues, as you said, are around continuous visibility. Probably the most important one you didn’t mention is continuous security. As these mission-critical applications migrate to the cloud, it isn’t just about reducing the risk and vulnerabilities in the application. We have tools today to help do that. We have products like Azul Intelligence Cloud, which can provide real-time information about the vulnerabilities in your existing applications that are running in either on-premises, hybrid cloud, or multi-cloud environments. But the most important thing is also the containers themselves—understanding the vulnerabilities and really thinking about the cloud not as a one-time push around security, but getting a mentality of continuous security. This is what you have to do on almost a nightly basis to ensure that the application and the container are secure.

Swapnil Bhartiya (12:11): One of the core strengths of this announcement is also “no code change, no refactoring required.” This seems like a big deal, but at the same time, refactoring is also a huge challenge in itself. Java has been around for so long, and the way it’s used is beyond imagination. Now you’re moving to hybrid and cloud environments. How does this partnership ensure the code doesn’t change? How do you assure compatibility when you don’t even know how somebody is using it—some are legacy applications, some are modern applications?

George Gould (13:04): That assurance is really provided through the JVM itself. Every JVM that Azul has ever shipped has been TCK tested. This is a suite of over 110,000 unit tests that we run against every binary build of our JDKs. That is the starting point for our quality testing to ensure that our JVM meets the spec standards for that given version of Java. That has always been the mantra of Java: “Write once, run anywhere.” That compatibility is achieved through the JVM itself. The nice thing is, Java has always been a portable language. It still doesn’t mean that when you deploy to the cloud, you’re going to be able to ensure you have the same level of performance. That’s where having the option in Payara Qube to pick the JVM that best fits the needs of that application becomes important. A developer might have a very simple service and want to use Platform Core because they want a well-curated, supported, and secure JDK, but performance isn’t critical. Sometimes an application will need better runtime performance, and by applying Platform Prime with the Zing JDK—a turbocharger you can essentially attach to that application—you can get better metrics, achieve greater throughput, see faster compilation, and just flat-out performance that translates to a better experience and greater KPIs. All of that was done through two assurances: One, you’re in a JVM that will ensure your application will run in those cloud environments. Two, it’s going to run better, and you didn’t change a line of code. It doesn’t mean that in time you can’t make a decision, once you’re in the cloud, to look at additional refactoring. But it doesn’t become a barrier for not moving the application at all. I think that’s the most compelling aspect of the partnership—not just with Payara, but also with Chainguard and RapidFort. How can you alleviate the barriers to unlock that 40% of Java applications that are still on-premises? You have to make it compelling, cost-effective, and give them a reason. The reason is that by combining the technologies and partners I just talked about, you’re going to get a better experience and better application runtime that’s either going to achieve greater KPIs and milestones or cost savings. That’s compelling—and again, no code change.

Swapnil Bhartiya (16:02): Java has been around for a very long time. When we talk to companies like Azul, most folks forget about the role of mainframes and the Linux kernel. We talk about shiny new objects like Kubernetes and ChatGPT and GenAI. But the fact is, Java has been around for a long time and is evolving rapidly thanks to the community you folks have built, especially when we talk about cloud-native and containerized workloads. How are you seeing customer expectations shifting when it comes to Java in modern architectures?

George Gould (16:38): I think the biggest feedback we continue to get is that it’s never fast enough. I think there’s more we can do around compiler optimization and startup speeds. Customers expect that when they’re meeting loads, when they’re scaling out an application or scaling up, they expect these containers to start instantly and the applications inside them to start quicker. That’s important when you look at the value of the cloud, particularly around elasticity and meeting demand—both scaling up and scaling down. We need to be better, we need to be more efficient, and I think that’s something we need to continue to focus on and leverage through innovation. Azul is certainly going to be working with the community to put forth innovations within the open-source project, but I think there are things we can do today within our product that are in advance of that—to be a leader and thought leader in that space by demonstrating what we can do today.

Swapnil Bhartiya (17:41): What role do you see Azul and Payara playing in helping Java stay relevant—and saying that Java is still relevant is kind of a redundant question because Java is playing a very significant role—but what role are you folks playing in ensuring that even modern users continue to use Java in high-performance cloud environments effectively and efficiently?

George Gould (18:09): I think it’s, as you pointed out, continually ensuring that we’re providing Java runtimes in the environments that customers need. You pointed out the edge. We have a brand of our product called Zulu Core for Redistribution that is embedded in some very important products. They’re embedded today in cars, in infotainment systems. We’re embedded in routers, in network devices. This is about the dream that Java should be everywhere. Many companies have lost focus and think of Java just as server-side. We continue to support desktops. I think this is about keeping Java relevant, which also means keeping it active and used in all kinds of use cases and environments—whether it’s the edge, whether it’s mobile devices, all the way up to large servers.

Swapnil Bhartiya (19:12): You mentioned some modern workloads, and these days the hottest topic is AI, GenAI, and LLMs. I’m asking two questions: How does Azul look at AI as workloads that folks are running? And how is Azul leveraging AI and GenAI to help customers?

George Gould (19:34): I think if you look at products like GitHub Copilot and many other tools that are helping to generate code, I think it’s a perfect fit. Anytime we can make a developer’s task of writing code, iterating, improving, and simplifying easier, that is great. I think there’s a great fit for AI in that space. We see AI, though—when you really look at how the enterprise is using Java today—in the most important mission-critical applications. We don’t see Java getting replaced anywhere. In fact, we believe that AI is only going to increase the use of Java because Java is a great tool and language to use when you’re building high-scale, high-performance applications. The idea is that these existing applications—whether it’s an e-commerce application—will need to wrap AI services in a set of Java APIs and bring them into the existing applications. We see that there will still be important languages like Python that will be closer to building out key sets of services, but ultimately those services will be wrapped in Java and integrated into the applications that we use every day. So it’s not a replacement—it’s more of an integration. And the points of integration will continue to be Java, which I think is the de facto language for that.

Swapnil Bhartiya (21:13): What’s on the roadmap for Azul? Just teasers—don’t reveal too much. What kind of partnerships or product innovations are in the pipeline that you’re excited about and your teams are working on?

George Gould (21:29): As I pointed out, there are three dimensions that we look at for how we decide to build versus buy versus partner. Developer productivity is critical. We think there’s a lot we can do ourselves and partner to augment that. Our Intelligence Cloud product today offers a number of important functionalities. We have the ability that when you have a running application, we can tell you that there are portions of your code that are now vulnerable. Maybe you didn’t test it for months because you haven’t pushed out a new release. We can also give you information about your code inventory—things like dead code. Those are great features. We think there’s more that we can build into Intelligence Cloud. It’s an extensible architecture—things like technical debt, more information about the quality of your application. I think there are areas there to innovate and possibly partner. The other area is going to be around cost savings. Are there ways that we can provide additional signals to containers and Kubernetes orchestration layers that make running Java applications in the cloud even more efficient, particularly when you’re talking about startup speeds and raw performance? I think you’re going to see innovations from Azul in these areas to make it easier to scale faster, go faster, and be more efficient—everything from right-sizing your VMs to just saving costs on infrastructure by being more efficient.

Swapnil Bhartiya (23:10): George, thank you so much for joining me today and giving us an update on this partnership and also how Java is evolving and growing, and how modern organizations are bringing Java into modern workloads. Thank you for sharing all those great insights, and I look forward to chatting with you folks again.

George Gould (23:28): Thank you so much. Great to be included. Look forward to our next conversation.
