Guest: Pavel Despot (LinkedIn)
Company: Akamai Technologies (Twitter)
Show: Newsroom
Akamai Technologies recently announced the opening of three new cloud computing sites to meet the needs of modern applications that require higher performance, lower latency, and global scalability. This increases its distributed footprint to more than 42,000 locations across 134 countries.
In this episode of TFiR: Newsroom, Pavel Despot, Senior Product Marketing Manager at Akamai Technologies, talks about this recent announcement and shares his thoughts on the current market trends.
Highlights of this video interview:
- In February, Akamai talked about its vision of having distributed data centers with infrastructure services and storage services that are easy to consume and connect to its platform. This week, it announced three new data centers in Washington, D.C., Paris, and Chicago, which are already live and in use by customers.
- Site selection is driven by customer demand and by the fact that certain places are easier and quicker to build in by virtue of where they are, but there are also some strategic drivers.
- Washington, D.C. was selected because Akamai already has scrubbing centers for its DDoS solution in that area. Paris has one of the highest data center densities in the entire world and is an important location for data sovereignty across Europe. Chicago serves as a big low-latency backup for the New York, Philadelphia, and D.C. areas.
- The next two sites coming up are in Chennai and Seattle. That makes 16 computing sites, with a plan to reach a total of 23 by the end of 2023.
- There are new workloads and applications all over the world that require new compute, whether it's games, new video services, or new e-commerce services. Some workloads need to execute in very localized places for low latency.
- When you want to distribute your inventory database across multiple locations and replicate it for resiliency or performance, it becomes very difficult because you have to manage the replication, networking, connectivity, developer access, load balancing, and replication time (see the first sketch after these highlights). That's where the Akamai platform comes in: pairing these locations with the platform's connectivity, and then layering in state-of-the-art technology from Akamai's partners, can solve data distribution problems.
- On the importance of distributed infrastructure: Akamai takes geopolitical situations into account when planning the network and backup connectivity, asking what would happen if it lost certain links into the platform, and specifically into compute. That is why it has 23 locations, including in Europe and India. Even in places where there is no official mandate, the market is asking for it. GDPR and privacy are very complex, and infrastructure alone doesn't solve them, but it starts with infrastructure.
- On new workloads and applications: Despot says the newest piece is data distribution: taking the CDN model, where static content was pushed out to different locations for a variety of reasons, and now doing the same thing with data, with some compute added alongside it. If you can move the data out and slot in this distribution layer, much as the CDN did almost transparently to browsers, that becomes interesting. Once you do that, your big data lake becomes much more usable and securely accessible. You have your distribution layer, and the edge portion takes care of the usual security, just as it has done for your APIs and CDN. That is a big change in architecture.
- On the evolution of Linode: The combination of Akamai and Linode enabled access to much larger customers and serves those that need larger capacity, more bandwidth, more locations, and reduced egress fees. Linode with Akamai can serve both individual developers and higher-end markets.
- On the evolution of the cloud: Despot says the reality is that we live in a diverse environment. AM radio is still around, and IBM still makes mainframes; now there are mainframes in the cloud. At first, content moved to the edge; then came security policies, bot detection, AI-type analysis, and some functions. For some things, it's better to move out, and "out" in this case also means hybrid clouds.
- A lot of applications are going to take advantage of the distributed model in order to scale cost-effectively.
- AI and large language models carry a lot of weight, but by and large they depend on the underlying infrastructure and connectivity; abstracting that away lets you get to the workloads instead of the plumbing. That's where the ultimate idea of these distributed sites comes in.
- Its new global load balancer, which will be out in beta soon, allows customers to select between local and global load balancing across Akamai's network and sites. You just put in an IP or a fully qualified domain name (FQDN); you do not need a target group, abstractions about zones, or new nomenclature to learn (see the second sketch after these highlights). If it's a virtual private cloud (VPC), give it a hostname: whether the target is outside your VPC or inside it, wherever you have your VPC defined, the load balancer will route to it.
- Even though Akamai has a network/telco background and Linode has a cloud background, they complement each other. They both have open standards. They just have different words for the same thing.
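To make the replication and routing burden from the inventory-database highlight concrete, here is a minimal sketch. The replica endpoints, regions, and latencies are entirely hypothetical (none of these hostnames are Akamai or Linode products); the point is the kind of plumbing an application ends up managing itself once data is replicated across regions.

```python
# Hypothetical sketch: route reads to the lowest-latency replica and writes
# to a single primary -- plumbing you manage yourself when an inventory
# database is distributed across locations. Endpoints below are made up.
from dataclasses import dataclass


@dataclass
class Replica:
    region: str
    endpoint: str      # database hostname (illustrative, not a real service)
    latency_ms: float  # latency measured from the calling application


PRIMARY = Replica("us-east", "inventory-primary.example.internal", 8.0)

READ_REPLICAS = [
    Replica("us-east", "inventory-us-east.example.internal", 8.0),
    Replica("eu-west", "inventory-eu-west.example.internal", 22.0),
    Replica("ap-south", "inventory-ap-south.example.internal", 41.0),
]


def route(is_write: bool) -> Replica:
    """Send writes to the primary; send reads to the closest replica."""
    if is_write:
        return PRIMARY
    return min(READ_REPLICAS, key=lambda r: r.latency_ms)


print(route(is_write=False).endpoint)  # read  -> nearest replica
print(route(is_write=True).endpoint)   # write -> primary
```

Multiply this by failover, developer access, and replication lag, and the appeal of letting the platform handle the distribution layer becomes clear.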
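The global load balancer highlight describes targets as plain IPs or FQDNs, with no target groups or zone nomenclature. As an illustrative sketch only, and not Akamai's actual interface, a selection loop over such targets could be as simple as the following; the site names are made up.

```python
# Hypothetical sketch: targets are nothing more than FQDNs or IP addresses,
# mirroring the "just put in an IP or FQDN" idea from the interview.
from itertools import cycle

# Example targets (made-up names): one FQDN per site plus a bare IP.
TARGETS = cycle([
    "app.washington-dc.example.com",
    "app.paris.example.com",
    "203.0.113.10",
])


def next_target() -> str:
    """Round-robin over the configured targets."""
    return next(TARGETS)


for _ in range(4):
    print(next_target())
```

What matters in the sketch is what is absent: no target groups, no zone definitions, and no provider-specific nomenclature wrapped around the targets.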
This summary was written by Camille Gregory.