Akamai, a cybersecurity and cloud company, is fast emerging as a major player in the cloud-native and Kubernetes ecosystem. Since acquiring Linode, the company has been steadily increasing its contribution to the cloud-native world.
At the recent KubeCon + CloudNativeCon hosted in Salt Lake City, Utah, the company announced the Akamai App Platform — a ready-to-run solution that makes it easy to deploy, manage, and scale highly distributed applications.
In this episode, Ari Weil, VP of Cloud Computing at Akamai, joins me to talk about the origins of the distributed cloud vision that culminated in the development of the Akamai App Platform. Weil said that Akamai’s customers recognized the potential of its robust, globally distributed content delivery network (CDN) and questioned why they couldn’t deploy applications on it instead of just content. Several years ago, Akamai worked with Apple to enable iCloud Private Relay by deploying containers on its platform and managing that product on its edge. That was Akamai’s first foray into deploying customer code onto its edge network.
A key milestone in Akamai’s journey was the acquisition of Red Kubes, which contributed Otomi technology as the backbone for the Akamai App Platform. According to Weil, this platform is designed to accelerate value realization and improve scalability for enterprises leveraging Kubernetes.
Weil discussed the challenges organizations face with Kubernetes deployments, such as vendor lock-in and high operational expenses. The Akamai App Platform, built on open-source Kubernetes and CNCF (Cloud Native Computing Foundation) projects, aims to simplify the deployment process and facilitate easier migrations. Weil also emphasized Akamai’s distributed cloud approach, which utilizes its extensive global network to ensure low latency and regulatory compliance.
Looking ahead, the platform’s roadmap includes capabilities like AI inference, distributed databases, and verticalized use cases, such as hyper-personalization for the retail sector. Weil highlighted Akamai’s commitment to open-source contributions and its active engagement with the cloud native community.
For a deep dive into Akamai’s strategy for cloud native, Kubernetes, and distributed cloud, watch the full interview above.
Guest: Ari Weil (LinkedIn)
Companies: Akamai (Twitter)
Show: Let’s Talk
[read more]
Questions discussed
- What is Akamai all about, especially in the context of KubeCon?
- What were the drivers behind the creation of the app platform?
- Why are we talking about distributed cloud, and what value is Akamai bringing to this ecosystem?
- How does the ecosystem see Akamai as a player in this space?
- What kind of internal culture is Akamai building to encourage developers to engage with open source communities?
- What role is the app platform playing in the space of generative AI?
Akamai’s New App Platform at KubeCon
- Swapnil Bhartiya introduces Ari Weil, Vice President of Cloud Computing at Akamai, and asks about Akamai’s role at KubeCon.
- Ari Weil explains that Akamai specializes in cloud computing, cybersecurity, and distribution, and that the company is announcing a new app platform at KubeCon.
- The app platform aims to help companies realize value faster and scale more efficiently using Kubernetes.
- Weil highlights the challenges companies face with Kubernetes, such as proprietary implementations and high people costs in scaling.
Challenges and Opportunities in Kubernetes Deployment
- Weil discusses the challenges of using proprietary services in Kubernetes, which can lock companies into a single platform.
- He explains that scaling Kubernetes at enterprise levels can be complex and costly, leading to delays in time to value.
- The app platform from Akamai aims to simplify Kubernetes deployment and scaling, enabling new business opportunities.
- Weil emphasizes the benefits of decoupling Kubernetes from any given cloud, allowing for greater flexibility and scalability.
Value of Distributed Cloud and Low Latency
- Weil explains the importance of low latency and location-specific compliance in distributed cloud environments.
- He provides examples of applications that require low latency, such as games, e-commerce, and intelligent copilots.
- High latency can lead to revenue loss for retailers and reduced efficacy for copilots.
- Distributed cloud allows compute to be applied where it is needed, improving user experience and compliance.
Simplifying Kubernetes Deployment and Management
- Weil discusses the ease of using managed services on cloud platforms but highlights the challenges of proprietary implementations.
- He explains that switching costs can be high, requiring significant changes in operations and re-skilling of employees.
- The app platform from Akamai uses open-source Kubernetes, making it easier to deploy and scale workloads.
- The platform includes four projects, two open-source and two proprietary, to simplify deployment and management.
Akamai’s Vision for Distributed Cloud
- Weil describes how Akamai’s customers inspired the development of the distributed cloud platform.
- Akamai has a globally distributed network with core data centers, distributed data centers, and points of presence.
- The platform allows for self-managed, fully managed, or a mix of both, depending on the customer’s needs.
- Akamai’s expertise in scaling and orchestration is leveraged to manage containerized applications on their network.
Collaboration and Contribution to Open Source Communities
- Weil mentions a keynote with the Flatcar team and Microsoft to discuss ecosystem collaboration.
- Akamai has been contributing to CNCF projects like OpenTelemetry, Argo, and Prometheus.
- The company has announced gold sponsorship of the CNCF to support open-source projects and provide service credits.
- Akamai engineers actively participate in open-source development, bringing enterprise use cases and needs to the community.
Internal Culture and Developer Engagement
- Weil highlights Akamai’s long-standing relationship with developers and focus on contributing to internet and networking standards.
- Akamai has contributed projects to the community, such as efficient memory management, to benefit the entire ecosystem.
- The company encourages developers to engage with open-source communities and become good contributors.
- Akamai’s history with MIT and its focus on cloud computing and edge computing are key to its developer engagement strategy.
Role of AI and Generative AI in Cloud Computing
- Weil discusses the potential of AI and generative AI in specialized, domain-specific copilot implementations.
- AI can help developers identify scale and performance issues early in the development life cycle.
- Akamai’s security apparatus already uses AI for API security checks and performance testing.
- The app platform will orchestrate AI-powered applications, making it easier for developers to build and deploy AI solutions.
Future Directions and Upcoming Announcements
- Weil teases upcoming announcements from Akamai, including more distributed database types and AI inference.
- Akamai has deployed specific GPUs for AI inferencing workloads and is building an ecosystem of partners.
- The company aims to reach more areas and locations with its distributed compute platform.
- Akamai will focus on verticalized use cases like hyper-personalization for retailers and distributed data management at scale.
Closing Remarks and Future Collaboration
- Swapnil Bhartiya thanks Ari Weil for the insights and looks forward to future conversations.
- Weil expresses gratitude for the opportunity to discuss Akamai’s work and future plans.
- The conversation ends on a positive note, with both parties looking forward to future collaborations.
Unedited Transcript (Note: the text is AI generated; it has not been edited or reviewed and may contain errors, including incorrect names. It’s provided here under a Creative Commons license (CC BY 4.0) to be used by bloggers, journalists, and analysts for creating their own content.)
Swapnil Bhartiya: This is your host Swapnil Bhartiya, and we are here at KubeCon + CloudNativeCon in Salt Lake City, Utah. Today we have with us, once again, Ari Weil, vice president of cloud computing at Akamai. Ari, it’s great to have you back on the show in person.
Ari Weil: That’s right. It’s fun to see you in person.
Swapnil Bhartiya: First of all, just a quick reminder for viewers: what is Akamai all about, especially in the context of this event, KubeCon?
Ari Weil: So, Akamai is a company that specializes in cloud computing, cybersecurity, and distribution. What we’re doing here at KubeCon is announcing our new app platform, which is a way for companies to realize value faster and scale more efficiently and effectively using Kubernetes.
Swapnil Bhartiya: Can you talk about what the drivers behind it were? What are the pain points, the challenges? Sometimes it is challenges, sometimes it’s going to the next step, that leads to the creation of something like this app platform.
Ari Weil: Absolutely. We’ll start with some of the challenges. Many companies want to use Kubernetes as a platform to build their application strategy on. It allows them to realize portability and more efficient scaling; really, it’s a way to maintain the promise of portability, at least in theory, from cloud to cloud. The challenge becomes, first of all, that when you use a proprietary implementation on a cloud platform, you’re using proprietary services, so the portability really isn’t there. Now, many companies are comfortable with that if they want to create everything for their application on one platform. But if your strategy, your users, or regulation force you to make a change, they find themselves locked into a platform because they’ve used all of these proprietary services. That’s one challenge. Another challenge comes when a company really looks at how they need to scale and grow their footprint with Kubernetes. It’s a mature platform, but it’s not necessarily mature in the sense of how easy it is to operate and scale at enterprise scale. So what we hear from companies is that they start delaying their time to value and incurring a very high people cost in scaling Kubernetes, and they wanted something that would simplify that process. Then, if you think about the opportunity: if you can decouple their Kubernetes deployment from any given cloud, and you can make it easier to deploy and scale, then you can enable them to create new business opportunities, whether it’s moving into a new geography, adding a new capability, or using the specific capabilities of a given cloud platform. The application platform from Akamai helps them realize all three of those benefits.
Swapnil Bhartiya: Why are we talking about distributed cloud, and what value is Akamai bringing to this whole ecosystem?
Ari Weil: Distributed starts to become really important when you think about two critical factors: one is latency, and the other is location specificity, which maybe you would call compliance. Let’s think about low latency first. If I’m building a game, an e-commerce application that has to apply personalization, or an intelligent copilot, in all of those cases data is being created at the edge, at the touch point, and the decisions need to be made there. If somebody is generating content or engaging with my application, and then I have to do a round trip that might take a second or two to complete, it doesn’t feel like a consistent, or even a good, user experience for whoever is interacting with you. And that comes at a cost: for a retailer it will come at a revenue cost; for anybody doing a subscription or loyalty-based service, it’ll come at the cost of customer loyalty or lifetime value. And if you just think about the efficacy of a copilot, copilots need to make instantaneous decisions to be valuable. So that’s one reason distribution matters: it allows you to take your compute and apply that workload where it needs to operate. The other factor is use cases where data needs to be stored and managed where it’s created. That’s anything from managing personally identifiable information and complying with some of the privacy laws, to being able to control the flow of intellectual property, to simply complying with your own governance initiatives. Distribution gives you compute where you need it, and basing that distribution on a container platform means I can drop a Kubernetes container anywhere that I might need a localized application or workload to run, without worrying that I’m violating some tenet of that data management.
Swapnil Bhartiya: Do customers have to put in some extra effort, extra work? And if they do, how does Akamai, or if you look at the app platform, ease that pain?
Ari Weil: On any given cloud, if you’re comfortable consuming managed services and proprietary services because you want to be all in on that cloud, then the only thing you really have to watch is how much you’re billed for your application, and where those costs and inefficiencies might crop up. Many companies prefer the simplification of a managed service on a cloud, but what we’ve been hearing from many companies is that they actually want to use their people, and manage their people costs versus managed services costs. That’s where proprietary implementations can become a challenge, because they do lock you in: they make it really easy for you to spend very heavily on proprietary services, and anytime you want to make a change, that switching cost is incredibly expensive. I have to change my operations, I have to change my scripting, I have to re-skill my employees, and if I have any sort of regulatory context around my app or my workload, I have to recertify what I’m building. All of that is hugely expensive and time consuming. So what we’re focusing on with the app platform is saying: let’s use regular open-source Kubernetes. We’re not changing it. We’re not creating proprietary wrappers around it. You can use our managed Kubernetes engine if you’d like to, but the app platform is based on CNCF projects, and as we want to scale and add services to a workload deployment, we will use the CNCF projects as they are. We’ll make it easier for a developer to deploy; we’ll go from months of deployment time down to minutes with the app platform, because we’ve built in all of the orchestration and the APIs for that deployment. But you’ll preserve the ability to migrate that workload anywhere you want to, if you want to. The app platform is comprised of four projects. Two of the projects are open source, and they’re licensed under Apache.
The other two are proprietary to Akamai because they’re built in with our platform. They use our services, but they’re not integral to the functioning of the app platform; they just make it easier for you to deploy on Akamai. And that same code, for the console and the API, can be replicated by any company for any cloud.
Swapnil Bhartiya: Going back to the question about distributed cloud once again: how did you folks get started, what vision do you have for the future, and how are you seeing the journey in the middle?
Ari Weil: Where this all started was our customers looking at our content delivery network and saying: you’ve got this incredible, globally distributed backbone with all this capacity and performance built in. Why can’t we, instead of just putting our content on it, actually deploy our applications? Several years ago, we actually worked with Apple to enable iCloud Private Relay by deploying containers on our platform and managing that product on our edge, so that was the first foray of our deployment of customer code onto our edge network. Since then, we’ve had a number of other customers approach us for similar use cases. So what we’re doing now with our distributed cloud build-out is this: we have core data centers, and we’ve got 28 of those; we’ve got distributed data centers, and there are 10 of those; and then we have 4,200 points of presence for our CDN around the globe. On those 4,200 points of presence, we’re now scaling a managed container service, so our customers can give us their containerized workload, our services team will learn how to operate and manage the application, and then we will run a fully managed containerized application on our network. So the scale of distribution ranges from totally self-managed, from core to distributed, to fully Akamai-managed for you on our edge. Depending on your latency requirements and the scale and reach you need for your application, you have the flexibility to choose self-managed in the core and distributed data centers, or fully managed and globally distributed on our edge network. And in all cases, there are concerns: is it something that I have to relearn? What is the orchestration going to be like? How will I scale?
The benefit of building a cloud onto a CDN is that all of our expertise in automatically scaling up and scaling down, dealing with failing machines and failover, understanding what your workload tolerance needs to be for the users you’re trying to reach, and then orchestrating the networking to get you there, is where Akamai’s bread and butter has been for the last 25 years. So as we look at where we’re going with distribution, it’s enabling more of that automation. Basically, you give us a workload, you tell us what users you’re trying to reach with it, and what your tolerance is for delivery, for latency, and for scale, and then we will orchestrate that for you. That’s where we’re headed over the next three years: full orchestration, without you having to manage the infrastructure yourself.
Swapnil Bhartiya: How does the ecosystem see Akamai as a player in this space?
Ari Weil: It’s a great question. We’re actually going to be doing a keynote together with the team from Flatcar and Microsoft on Friday, talking about this ecosystem and how we’re all collaborating to enable the community. If we look at the way Akamai has operated ever since we acquired Linode and started to build it into the Akamai platform to create our Akamai Connected Cloud, our engineers have been contributing heavily to several CNCF projects: OpenTelemetry, for example, Argo, and Prometheus. These are the projects we rely on for our own applications that we use to serve our customers. And as we start to think about growing our cloud, our focus has been this: we know that our customers want to build on Kubernetes as a platform. We use it and build upon it ourselves, so it makes sense for us to contribute back to the community, to make sure we are enabling the maintainers of Kubernetes just like we want to enable the maintainers of some of these other open-source projects. That’s why, earlier this year, we announced our gold sponsorship of the CNCF, so that we could contribute service credits where it makes sense; if the problem a project has with scaling is simply access to compute resources, we can give them that access at no cost to them. But we also found that it was better to work together with the community and have our engineers contributing code, bringing the context of the enterprise security, scalability, and performance use cases that we were seeing from our customers. It also helps the CNCF maintainers of these projects understand where they can take their own commercial applications for their projects, and how to make a more robust open-source project for the whole community to leverage. I think there are several large corporate entities that are starting to understand, like Akamai has, that this is the way to contribute to and enable the community: by actively participating in development and standards, and then, where possible, making our own resources available for the maintainers to scale.
Swapnil Bhartiya: Can you also talk about what kind of internal culture you folks are building at Akamai to encourage developers not only to get engaged with open source communities, but also to become good citizens and good contributors?
Ari Weil: We’ve got several engineers doing exactly that. In fact, earlier this year, we contributed a project to the community that was all about how to do more efficient management of memory, because it was something we needed for our own scaling of our edge computing capabilities, but we thought it was something the entire community could take advantage of. And in fact, several large companies immediately contacted us and said that we had solved problems in the scaling of their applications. Some of them are our competitors, but from our perspective, the rising tide lifts all boats. If you’re going to contribute to the community, then you have to faithfully contribute to the entirety of the community; you can’t say, I want these companies to use my code and not the others. Ever since its inception (Akamai was born out of MIT), we have had a tight relationship with developers, and our focus has always been on contributing not only to the architectures and standards of how the internet and networking should evolve, but now, with cloud computing, to the use cases and reference architectures for the ways we can evolve delivering applications at scale, especially as we start looking at shifting compute to the edge. Akamai knows a thing or two about the edge, and we’ve been learning very quickly about cloud. What we wanted to do is merge those things together: bring all the benefits of global scalability, of automatic scaling and failover, of being able to take advantage of commodity hardware and commodity projects, in order to create something novel and unique in the marketplace; distribute all of that and combine it on a global backbone, as we have; and show other businesses how they can take advantage of the same architectures and run their specific workloads.
An underpinning of Kubernetes is the glue that will hold everything together. We really feel like this is something the entire community will benefit from, and so far, it seems like the community agrees here at the show.
Swapnil Bhartiya: For the last couple of shows, there has been a lot of focus on Gen AI, generative AI. When we look at these new technologies, we can look at AI and Gen AI as a workload, or we can look at them as an enabler that you use on the back end. What role is the app platform playing in that space?
Ari Weil: I think the real promise that AI, that generative AI, holds for developers is in specialized, domain-specific copilot implementations. As a developer, I want an AI that is going to highlight for me where I might be building something that’s going to have a pinch point from a scale perspective, or where the performance can’t possibly be what I hope it would be. As I build and deploy in my deployment pipelines, I want unit and system tests that are going to tell me either that I’ve architected an API that is outside the bounds of my standards, or, when I try to run a performance test, that this component of my architecture won’t scale. We’ve already got AI-powered and Gen AI capabilities in our security apparatus. For example, our API security product, which came from the acquisition of Noname, will apply certain checks to your API infrastructure to show you if you’re exposing too much data, or if you haven’t implemented the right governance on your API. And we’ve got partners helping us with API context, to show you the speed and the performance of your API. So for us, bringing AI copilots throughout the development life cycle is really important, and where we think the app platform will play a role is this: the more these projects build AI into the way they provide value to developers, the more developers will get that in concert, orchestrated through the app platform, and then they can build the AI-powered applications they’re looking for much more easily and much more effectively.
Swapnil Bhartiya: Of course, when things are ready, we’ll talk about them, but just tease it for now. When it comes to the whole cloud computing space, what are the things we can expect from Akamai? We have to be careful not to pre-announce anything, but just tease us: hey, these are the things in the pipeline.
Ari Weil: We are working very heavily on more distributed database types that we want to add to the platform, in concert with the app platform itself, so we can give developers more flexibility in the sorts of workloads they handle. We’re going to be leaning a lot more into AI inference in the coming year. We’ve already deployed some very specific GPUs that are designed for AI inferencing workloads, and we’ve been building out the ecosystem of partners that will help our customers actually build and distribute those workloads at scale, so we’ll see those in the coming year. We’ve also been continuing to build out our platform so that we can reach more areas and more locations with our distributed compute platform. And then, from our perspective, the more interesting thing is when we start looking at verticalized use cases, like hyper-personalization for retailers and the way you can do inventory management and distributed data management at scale. These are some of the things you will see announcements from us covering over the coming months and quarters.
Swapnil Bhartiya: Again, thank you so much for joining me today and talking about the work that Akamai is doing in this space. Thank you so much for the great insights, and as usual, I look forward to chatting with you again.
Ari Weil: Thank you so much. Thanks for having us on the show, and it was great to finally meet you.
[/read]