There is a gap widening inside enterprise IT organizations right now. On one side: ambitious AI roadmaps, approved budgets, and executive pressure to show results. On the other: the brutal operational reality of standing up AI infrastructure that is regulatory-compliant, cost-predictable, and actually capable of running agents in production — not just in a proof of concept.
Most enterprises have discovered that getting to production AI requires far more than a hyperscaler account and a model API key. It requires aggregating data from siloed legacy systems into platforms capable of powering large language models and small language models. It requires building and managing a private cloud ecosystem that satisfies compliance teams in healthcare, financial services, and other regulated verticals. It requires GPU and CPU procurement strategy at a time when chip availability is as constrained as the engineering talent to deploy them. And critically, it requires a repeatable operating model — so that once the first agent is in production, the second and third can follow without rebuilding the stack from scratch each time.
This is precisely the execution gap that Rackspace Technology has positioned itself to close. Not as a neocloud inference provider, and not as a traditional managed services company — but as an end-to-end operationalizer of the enterprise AI ecosystem. Rackspace’s approach combines governed private cloud infrastructure (built on VMware Cloud Foundation), strategic partnerships with AI outcome platforms like Palantir Technologies and Uniphore, and an FDE Pod Model that embeds forward-deployed engineers with mixed skillsets directly inside customer environments.
The logic is straightforward: if the CIO’s team no longer has to think about infrastructure operationalization, compliance architecture, chip procurement, or ecosystem integration — they can focus entirely on agents, use cases, and business outcomes. That is the proposition Rackspace is taking to the enterprise market in 2026, and it is a significant bet on the idea that the companies that win the AI race will be the ones that can operationalize it, not just theorize about it.
Joe Vito brings 25 years of CIO and cloud transformation experience to this conversation — including prior roles at AWS, Dell-EMC, UBS AG, and Dun & Bradstreet — and speaks with rare candor about what is actually blocking enterprises, what CIOs should prioritize, and why Rackspace believes the private cloud model will prove more durable than the hyperscaler route for AI at scale.
The Guest: Joe Vito, SVP of Strategic Alliance Partnerships at Rackspace Technology
Key Takeaways
- Enterprise AI stalls at three layers: data aggregation, ecosystem operationalization, and operational cost unpredictability — Rackspace removes all three.
- The FDE Pod Model deploys forward-deployed engineers with mixed skillsets (AI use case identification, platform operation, data architecture) as a repeatable deployment unit against any AI platform.
- Rackspace’s private cloud, built on VMware Cloud Foundation (VCF), enables cloud modernization with less disruption and a shorter timeline than hyperscaler migration, while delivering regulatory compliance for healthcare and financial services.
- The Palantir and Uniphore partnership stack gives enterprises access to AI outcome-based platforms (Foundry, AIP, Business AI Cloud) running on governed, managed private cloud infrastructure — with Rackspace owning the operational layer.
- CIO advice from a practitioner with 25 years of experience: accelerate data to AI platforms, stop federating AI across too many tools, and let partners operationalize the ecosystem so your team can focus on agents and business outcomes.
***
In this exclusive interview with Swapnil Bhartiya at TFiR, Joe Vito, SVP of Strategic Alliance Partnerships at Rackspace Technology, discusses why enterprise AI initiatives stall before reaching production, how Rackspace’s private cloud and FDE pod model close the execution gap, and what CIOs should prioritize in 2026 to move from AI experimentation to measurable business outcomes.
The Three Blockers Keeping Enterprise AI Stuck in Pilot
Despite years of investment and intent, most enterprise AI initiatives have not crossed the threshold from pilot to production. The reasons are consistent across industries and customer sizes — and they operate at multiple layers of the technology stack simultaneously.
Q: What’s actually blocking enterprises from moving from “we want AI” to “AI is working for us”?
Joe Vito: “What we hear from our customers are blockers at a couple different levels. One is, we need to aggregate data to actually leverage the technology. That’s one blocker we’re helping customers get past. The second one is standing up the ecosystem. That’s probably the hardest exercise. This is new technology that requires a more purpose-built infrastructure, as well as the software layer, which creates more of a private cloud capability or an infrastructure foundation. And then you’ve got the AI outcome-based platforms — Palantir and Uniphore are sitting on top. So operationalizing that is hard. Operationalizing it so that it’s regulatory compliant is hard. And then obviously running it. What we at Rackspace are doing is operationalizing the ecosystem and giving you a regulated, compliant private cloud on which to run the AI platforms.”
Joe Vito: “What does that really translate to for our customers? It’s very simple. They have to worry about aggregating and bringing the data into the platforms, modeling, maybe creating SLMs, building agents, and then end-to-end deployment. That’s their only focus. We take out the question of how do you get the data into these platforms. We take out the ecosystem build out. And then finally, the other element that people get quite concerned about is what’s the operational cost and how do you manage it. We are also now taking that out of the equation and trying to provide more predictable costs as you run your platforms.”
The FDE Pod Model: Forward-Deployed Engineers as a Repeatable Deployment Unit
One of the most significant structural shifts in how enterprise AI is delivered is the rise of the forward-deployed engineer (FDE) — a practitioner embedded directly inside the customer organization, responsible not just for infrastructure but for identifying use cases and accelerating agent deployment. Rackspace has formalized this into what it calls the FDE pod model, built in collaboration with partners including Palantir.
Q: Why is the forward-deployed engineer model gaining traction now, and what does it actually look like for your customers?
Joe Vito: “Once you operationalize it, the point is that there’s a different skillset wrapped around this capability. Rackspace has made a very deliberate decision to say we’re not only going to operationalize the ecosystem and stand up this capability, but we’re also going to come in with forward-deployed engineers. Working with partners like Palantir, we have what we call an FDE pod model. We understand how to deploy FDEs of different skillsets. There’s going to be an FDE that comes in and helps you understand where the opportunity is — what use case is going to produce significant business outcomes. And then the second part is we’ll have FDEs that actually operate the platforms. We may also have to supplement that with traditional data architects who are helping you aggregate data.”
Joe Vito: “We look at it as a deployment of a set of skillsets, inclusive now of FDEs, with some traditional capabilities where needed. That’s part of the acceleration exercise: operationalize the ecosystem, bring in the pod model of resources to execute, and then accelerate that agent into production. You’re going to have to do this once, and then you’re going to do it over and over and over again. There’s a repeatable process that goes into this. That’s also the methodology we use as part of Palantir’s and Uniphore’s approaches. We created a Rackspace pod model which is deployable against any of our AI platforms.”
The Palantir Partnership: Private Cloud AI Infrastructure, Managed Operations, and Agent Deployment at Scale
Rackspace’s partnership with Palantir Technologies — announced in February 2026 — places Rackspace as the managed operations and private cloud infrastructure layer beneath Palantir Foundry and AIP. This is not a reseller relationship. Rackspace owns the operationalization of the environment, the regulatory compliance architecture, and the forward-deployed talent that runs the platform and accelerates agent deployment.
Q: Walk us through the Rackspace-Palantir partnership. What does Rackspace own in that stack, and why does it matter for customers?
Joe Vito: “Once you operationalize the ecosystem, you’ve taken that exercise out. Once you’ve said we’re going to come up with more predictable costs, you’ve taken that challenge off the plate. Customers can just focus on the business of AI. Our entire approach to AI is accelerate business outcomes. When you look at the other options in the marketplace, those challenges still exist. People maybe get past some of those challenges and deploy a pilot. We’re well past pilots. We are going to deploy agents into production. We’re then going to monitor their effectiveness and reintroduce the next agent and the next agent. This is a repeatable process that customers understand — with us operationalizing and managing that environment for you, you can focus on business value. That’s really the essence of the new Rackspace.”
Joe Vito: “The other piece of this — this entire business runs on GPU and quite frankly CPU. What chips do you need? When do you need them? We’re actually taking that on. We’re working with our partners and determining what’s the right chip capability you need. In some cases we’re also looking at capabilities that manage where you can use CPU and don’t need GPU. The chip capability is becoming as big an issue as anything else. If we can take that away, if we can take questions like how do I engineer it, how do I manage it and operationalize it, and customers can just focus on the business — this is what customers are telling us.”
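Vito’s point about matching workloads to chips can be illustrated with a simple placement heuristic. The sketch below is purely illustrative: the workload categories, function name, and latency threshold are assumptions made for the example, not Rackspace’s actual placement logic.

```python
# Hypothetical sketch of GPU-vs-CPU workload placement.
# Categories and thresholds are illustrative assumptions only.

def place_workload(workload_type: str, latency_target_ms: int) -> str:
    """Pick a compute tier for an AI workload.

    GPU capacity is scarce and expensive, so reserve it for training,
    fine-tuning, and latency-sensitive inference; batch inference and
    lightweight jobs can often run on CPU instead.
    """
    gpu_bound = {"training", "fine-tuning"}
    if workload_type in gpu_bound:
        return "gpu"
    if workload_type == "inference" and latency_target_ms < 100:
        return "gpu"   # interactive inference needs GPU-class latency
    return "cpu"       # batch inference and classical ML fit on CPU

print(place_workload("fine-tuning", 0))    # gpu
print(place_workload("inference", 50))     # gpu
print(place_workload("inference", 5000))   # cpu
```

The practical point is that a deliberate placement policy like this, however simple, is what turns chip procurement from a reactive scramble into a plannable capacity question.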
Regulated Industries, Sovereign Cloud, and Compliance-First AI Architecture
For healthcare organizations, financial services firms, and government entities, the path to production AI is complicated by data sovereignty requirements, audit and compliance obligations, and the risk of deploying models whose effectiveness changes over time. Rackspace’s private cloud model is designed to address these constraints at the infrastructure level, giving regulated industry customers a dedicated, governed environment that removes the compliance exposure of public cloud AI deployments.
Q: What are the unique challenges regulated industries face around data sovereignty and compliance, and how does Rackspace solve for those?
Joe Vito: “In terms of compliant models and how we ensure that, we work with the customers. Many of these platforms are aggregating content into an ontology, a Palantir model, or, in Uniphore’s model, coming in with curated SLMs, meaning SLMs very specific to use cases that are already regulatory compliant. Other customers have leveraged similar capabilities, and now you build your agents on proven SLMs. The compliance of that obviously starts from the infrastructure all the way up to the data. In these environments, you have to manage who has access to this data. How does that evolve over time?”
Joe Vito: “A lot of our customers in healthcare don’t want their data in the public cloud. So we expand their data center into a private cloud. It’s their content in their cloud. That takes away some of what compliance audit and risk folks are worried about. We take that off the table because this is a dedicated solution for your capability. How we think about going forward managing and governing — a lot of that is going to be the effectiveness of the model and how well it’s actually working. And the interesting thing people are about ready to face is that once you deploy these models and they’re working, they may change effectiveness or the effectiveness may degrade. Is it time to upgrade them? Is it time to maybe shut one down and deploy a new one? We’re giving customers the ability to think through that because we’ve taken care of all the other stuff — operationalizing, managing the environment. Now you can just focus on the compliance and auditing of models, data, and agents.”
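The monitoring question Vito raises, knowing when a deployed model’s effectiveness has degraded enough to warrant upgrade or retirement, can be sketched as a rolling success-rate check. Everything below is a hypothetical illustration: the class, its parameters, and the thresholds are invented for this example, not a Rackspace or Palantir API.

```python
from collections import deque

class EffectivenessMonitor:
    """Illustrative sketch: track a rolling success rate for a deployed
    agent and flag it for review when effectiveness drops below its
    baseline by more than a set margin. Names and defaults are assumed."""

    def __init__(self, baseline: float, window: int = 100, margin: float = 0.05):
        self.baseline = baseline          # success rate measured at deployment
        self.margin = margin              # tolerated degradation before review
        self.outcomes = deque(maxlen=window)

    def record(self, success: bool) -> None:
        """Log the outcome of one agent interaction."""
        self.outcomes.append(1 if success else 0)

    def needs_review(self) -> bool:
        """True once a full window of data shows degraded effectiveness."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False                  # not enough data yet
        current = sum(self.outcomes) / len(self.outcomes)
        return current < self.baseline - self.margin
```

A real governance layer would add audit logging, per-agent baselines, and human review workflows; the point is simply that degradation detection is a continuous operational task, not a one-time deployment gate.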
Cloud Modernization for AI: Private Cloud vs. Hyperscaler Tradeoffs
As enterprises look to modernize their infrastructure to support AI workloads, they face a choice between two fundamentally different paths: migrating to hyperscaler public cloud environments, or building a governed private cloud on a foundation like VMware Cloud Foundation (VCF). Rackspace makes a direct case for the private cloud path — specifically for customers who need faster time-to-value, lower disruption to existing operating models, and the ability to deploy AI without a multi-year cloud migration program.
Q: How are you seeing cloud modernization play out for AI at scale — and is the modernization happening because of AI or alongside it?
Joe Vito: “Our private cloud is built on VMware Cloud Foundation. A lot of customers are already in the VMware virtualization ecosystem. Broadcom is taking their customers to VCF, to a private cloud. We’re simply delivering that capability. So they’re modernizing onto a VCF or a private cloud footprint. That is different than modernizing into a hyperscaler environment. In a hyperscaler environment, things take years. You’re not only modernizing applications, you’re moving them into a public cloud environment, and that changes your operating model. With Rackspace, the private cloud model has less engineering of modernizing the apps, less changing of your operating model because we’re operating the environment for you. You can focus on it with a shorter timeline and smaller investment and get AI value.”
Joe Vito: “With some of the larger public cloud plays, these things take quite a while to roll out and execute, and you’ve got an entirely new operating model. And people are starting to focus on this: when you deploy agents, agents are becoming part of your operating model. So you have human and now technological resources running. That’s going to take a lot of thought as we evolve this. Rackspace is also working in that space, talking to partners around that, and we’re trying to determine how best we guide customers in helping them evolve their operating model as more of this capability deploys.”
The Uniphore Partnership and the Full AI Workload Ecosystem
Rackspace’s March 2026 partnership with Uniphore — the Business AI company backed by NVIDIA and AMD — introduced what both companies describe as an Infrastructure-to-Agents architecture. The joint offering integrates Uniphore’s Business AI Cloud (spanning inferencing, data preparation, small language models, and industry-specific AI agents) with Rackspace’s governed private cloud, targeting $100 million in enterprise AI deployments. Joe Vito provides context on how Rackspace thinks about AI workload types and why a single partnership is not sufficient to serve an enterprise’s full AI journey.
Q: Talk about the complexity of stitching together the full AI stack — data, inference, agents, managed services — and how Rackspace manages that across brownfield deployments and existing partner software.
Joe Vito: “People talk about AI workloads generically. There are different types of AI workloads. We’re very deliberate on who we partner with because what we’re trying to do is build a breadth of partnerships on top of our operationalized private cloud that will support all of the AI workloads the enterprise is going to be challenged with as they start to modernize on this new technology. This is not a neocloud play. Yes, we can do inferencing, and yes we can support that, but we don’t want to support that as a transactional business. We want to support that as part of an enterprise value proposition.”
Joe Vito: “If you look at AI-driven outcome platforms — the Palantir and Uniphore world — they’re slightly different. They target slightly different customers, but generally speaking, these are AI outcome-based agents. You also have inferencing workloads, you have fine-tuning workloads. What we see with our partnerships is that customers are going to engage across different workloads, at different points in their journey, and those workloads will also start to drive each other. What do I mean? You may not have a lot of inferencing need right now, but if you deploy 20 agents, your inferencing needs are probably going to go up. Your fine-tuning needs are probably going to go up. There’s no one platform that delivers all that capability.”
Joe Vito: “We are looking at an ecosystem of partners in the AI workload space that can support where you are on your journey, what AI workloads you’re deploying, and what’s the interdependence between them. We’re also very active in asking how what we’re doing with Uniphore and Palantir integrates with inferencing needs — so we don’t want customers having to, every time they have a new AI workload dimension, go out and find something else. No. Stay with Rackspace. We have the partnerships to deliver all those workload capabilities. We are building this now with our partnerships. We’re excited about giving the enterprise one place to shop for all their AI workload needs.”
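Vito’s scaling observation, that deploying more agents drives up inferencing and fine-tuning demand, can be made concrete with a back-of-envelope capacity model. The per-agent call rates and token counts below are invented for illustration and are not vendor figures.

```python
def estimated_inference_load(num_agents: int,
                             calls_per_agent_per_hour: int = 120,
                             tokens_per_call: int = 800) -> tuple[int, int]:
    """Rough, linear capacity estimate: aggregate inference demand grows
    with the number of deployed agents. All default figures here are
    illustrative assumptions, not measured values."""
    calls_per_hour = num_agents * calls_per_agent_per_hour
    tokens_per_hour = calls_per_hour * tokens_per_call
    return calls_per_hour, tokens_per_hour

# Going from a 5-agent pilot to 20 production agents quadruples demand.
print(estimated_inference_load(5))    # (600, 480000)
print(estimated_inference_load(20))   # (2400, 1920000)
```

Even this crude model makes the interdependence visible: an enterprise that plans only for its pilot-stage inference needs will be under-provisioned the moment agent rollout succeeds.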
CIO Guidance: Where to Start, How to Accelerate, and What Actually Sustains AI at Scale
With 25 years of experience as a CIO and technology transformation practitioner — spanning roles at AWS, Dell-EMC, UBS AG, Dun & Bradstreet, and US Trust & Merrill Lynch — Joe Vito speaks directly to enterprise technology leaders navigating their AI journey in 2026. His advice is specific, grounded in operational experience, and cuts through the noise of AI market hype.
Q: If a CIO or CTO is watching this and is somewhere in the middle of their own AI journey — investments made, expected outcomes still fuzzy — what is your honest advice?
Joe Vito: “If you want to build momentum around AI technology, you have to accelerate AI business outcomes. That’s obviously the underlying value proposition which Rackspace brings to the table. That said, if you want to sustain them operationally, managing the ecosystem of partners is a value proposition because the sustainability of that is us — it’s not you as the CIO and your company. CIOs have so much work in front of them. They’ve got legacy data platforms that have data captured into them that have to make it into these AI platforms. They’ve got existing AI technology. Sometimes they’ve got multiple platforms — whether it’s a Databricks or Snowflake, and maybe they’re doing some stuff with hyperscalers.”
Joe Vito: “I think sometimes too much federation of AI and data doesn’t speed things up. It slows them down. Because all of this business runs on data — as with prior technology waves, and I’ve seen four of them over the course of my 40 years — it’s all about data and getting data rationalized and in a state where you can produce value. We are thinking differently with our CIOs about: let’s accelerate the data to the AI platforms. Don’t worry about operationalizing it, managing it — worry about agents, worry about business outcomes. The business will feed itself. Because what’s going to happen is the more agents you deploy, the business is going to get this.”
Joe Vito: “We’re also seeing AI is not just a platform. We’re seeing AI operating systems now — with Palantir taking on more modernization of platforms and not just running agents. It’s an agent-based modernization. Understand the differences in AI workloads. Understand where the market is moving in terms of agents and deployment. Figure out how to get there faster. You don’t have to own everything. In fact, most organizations can’t afford to. We’d love to make the case why a private cloud deployment of AI is not only more cost-effective, but will also be less disruptive to your organization as opposed to going the hyperscaler route.”
Joe Vito: “Rackspace believes this is a game of partnerships and operationalizing those partnerships — operationalizing the ecosystem is how you move faster. We are convinced of it. The new Rackspace is investing in this heavily. And the partners get it. We work regularly with Dell, VMware, Palantir, Uniphore. They all understand we’re more valuable together than we are individually. That makes my job a little easier — and I think it’s because we recognize we become more valuable to our customers and the CIO.”