
Kong Co-Founder Talks About Adding Support For Windows And More…


Kong is a cloud connectivity company that is reinventing the way developers build software in today’s increasingly hybrid, multi-cloud world. Kong’s service connectivity platform enables developers, architects and operators to seamlessly connect all of their services to deliver unparalleled digital experiences and accelerate application development cycles.

Kong recently organized Kong Summit, with a focus on cloud connectivity. In this episode of Let’s Talk, I sat down with Marco Palladino, Co-Founder and CTO at Kong Inc., to talk about the event, cloud connectivity and more. One of the major highlights of our discussion was the support for Windows in the Kong Mesh 1.5 release. “We can mix and match Windows with containers, with Kubernetes, as well as with Linux virtual machines. It’s quite exciting, especially since the intended target for this feature is the enterprise architect that must support Windows as one of the additional platforms for the application,” said Palladino.

Here are some of the topics we covered in the show:

  • What is cloud connectivity and why is it important for DevOps and developer teams today?
  • Major announcements at Kong Summit.
  • What strategy does Kong have with Kong Gateway to support multiple service meshes, given the host of service mesh projects?
  • What role is service mesh going to play in the Edge use case and what is Kong doing in this space?
  • Added support for Windows with the 1.5 release of Kong Mesh.
  • Plans for next year?
  • Who is the direct competitor of Kong?

Guest: Marco Palladino (LinkedIn, Twitter)
Company: Kong Inc. (Twitter)
Show: Let’s Talk


Swapnil Bhartiya: Hi, this is your host, Swapnil Bhartiya, and welcome to TFiR Newsroom. My next guest is Marco Palladino, co-founder and CTO at Kong Inc. Before we jump into the show, I would like to thank our sponsors, Linode, who have been sponsoring multiple shows, including TFiR Newsroom and Let’s Talk. Thank you, Linode. Now, let’s go and talk to Marco.

Marco Palladino: Thank you for having me, again.

Swapnil Bhartiya: Right, this week it was Kong Summit and your team made a lot of announcements, and they’re all focused on cloud connectivity. We have talked about cloud connectivity earlier, but I just want to hammer that point once again. So tell me, what is cloud connectivity and why is it important for DevOps and developer teams today?

Marco Palladino: Well, it’s the underlying foundation of every application that we’re going to be building. Applications are getting decoupled, and if they are decoupled, then connectivity is how they communicate with each other. That connectivity becomes very important at this point. In the monolithic world, back in the day, we didn’t have as much connectivity because everything was running in the same code base; that was the definition of a monolith. So now connectivity is critical to make sure that our applications can reliably and securely communicate with each other. And full-stack connectivity is not only the connectivity that operates within the services of an application. It’s also the connectivity that allows us to enforce governance at the edge, outside of the application, if we want to enable cross-application or mobile communication on our APIs. So connectivity really is a broad use case that includes the gateway use cases at the edge, the cross-application use case, as well as the service mesh use case.

Swapnil Bhartiya: What other announcements did you make at the summit?

Marco Palladino: Well, we made lots of announcements. I’ll start with the first one: the Istio Gateway. We added support for Istio as an additional service mesh platform that Kong Gateway supports. This allows us to essentially supercharge Istio’s native gateway load balancer and add full lifecycle API management on top of it. It gives us control over how we want to expose APIs outside of Istio, and of course Kong Gateway already supported Kuma, so this essentially introduces support for a second service mesh that Kong Gateway can work with. When it comes to service mesh, we have announced Kong Mesh 1.5, a new release of our enterprise service mesh built on top of Kuma. It ships with Windows support and role-based access control. We’ve also announced Kong Gateway 2.6, which improves the performance of the gateway by 12% and reduces latency by 30%.

There is much more back and forth that our applications are going to be doing. Performance, of course, is very critical when it comes to the type of applications that we’re building, and with all the back and forth that microservices generate, making sure that Kong is always the fastest, most performant gateway out there is very important. We also announced a new version of the Kong Ingress Controller, 2.0. So, as you can see, a lot went into the summit: lots of product announcements, and, most importantly, each one of these announcements is available for streaming, so anybody can go and watch the keynote and the product announcements. Lots of customers were part of this Kong Summit. At the keynote, we had the CIO and CTO of Nasdaq, Brad, joining us for a fireside chat, but we also had customers like American Airlines. So it really was a packed event when it comes to product announcements and customers.
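To make the Istio Gateway announcement above more concrete, here is a rough sketch, assuming a Kubernetes cluster where the Kong Ingress Controller is already installed and Istio sidecar injection is turned on for Kong’s namespace. All names here (namespace, Ingress name) are hypothetical; the `productpage` backend is borrowed from Istio’s well-known Bookinfo sample.

```yaml
# Run Kong Gateway inside the Istio mesh by injecting a sidecar
# into its namespace, so Kong-to-service traffic stays in-mesh.
apiVersion: v1
kind: Namespace
metadata:
  name: kong-istio            # hypothetical namespace
  labels:
    istio-injection: enabled
---
# Route external traffic to a mesh service through Kong, where
# full lifecycle API management plugins can then be layered on.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: productpage-api       # hypothetical Ingress name
  namespace: kong-istio
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /productpage
            pathType: Prefix
            backend:
              service:
                name: productpage   # an Istio-managed mesh service
                port:
                  number: 9080
```

The division of labor is the point: Istio keeps handling service-to-service traffic inside the mesh, while Kong adds edge concerns such as authentication, rate limiting and developer onboarding in front of it.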

Swapnil Bhartiya: I want to go back to some of the announcements you made, and one was about service mesh. You mentioned Kuma and Istio, but there are a lot of other service meshes. What is your strategy with service meshes? Are you going to support all of them? Because when I look at Kong, you are trying to help customers; you want to be with customers where they are in their journey. So talk about what kind of strategy you have for service meshes, because there are so many: there is Linkerd, and then of course there are so many others.

Marco Palladino: Yeah. Well, of course, the goal for Kong Gateway is to support every service mesh that customers adopt. From a gateway standpoint, the gateway makes no assumption as to where the underlying APIs are being hosted, and this has always been part of the philosophy of Kong Gateway. So Kong Gateway supports every cloud, supports Kubernetes, supports VMs, and supports the major service meshes that our customers want us to support. We do support Kuma in Kong Mesh, and we do support Istio with this new announcement. When it comes to the service mesh evaluation per se, of course we believe that if anybody is looking for a service mesh and needs to implement one, and they’re doing an evaluation right now, so they haven’t chosen a service mesh yet, they should probably take a look at what Kong Mesh and Kuma can provide, because it’s very compelling compared to the first-generation service meshes.

It provides built-in multi-cluster, multi-zone distribution, and it supports VMs out of the box. With the Windows support we just announced, we now support essentially 13 official distributions. It’s a service mesh that was engineered for the enterprise architect that must support all of these different environments. Now, whatever service mesh the user decides to use at the end of the day, Kong Gateway is going to support them all when it comes to exposing those APIs and providing governance over how those APIs are exposed.

Swapnil Bhartiya: You talked about first-generation service mesh, second generation. There are some new use cases, or, I mean, environments that are also emerging. One very good example is edge use cases, and there are a lot of lightweight Kubernetes distributions being used there. What role is service mesh going to play in that space? And what is Kong doing for that? Where is edge on your radar?

Marco Palladino: Well, so we’re looking at three different types of connectivity that every team has to address. There is the traditional connectivity at the edge, for our external partners and mobile applications and ecosystems and developers. There is going to be connectivity inside of the organization, across different applications. And there is going to be connectivity inside the applications themselves as they get transitioned to microservices. So: at the edge, across the apps, and within the apps. Service mesh provides a smart grid, if you wish, to connect all of this traffic very well within the application.

So service mesh addresses that last connectivity use case. But it doesn’t really give us the right governance to determine how we want to expose our APIs, how we want to meter different users with different, let’s say, rate limits, how we want to create an onboarding flow that allows our users to explore and discover these APIs, and to provision keys so they can consume those APIs. All of this, which is more in the context of user management and governance over how we want to expose those APIs, these are more traditional API management functions, if you wish. Service meshes today don’t give us that governance. That’s why API management and service meshes can be used together to provide this full stack for our service connectivity.
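For a concrete sense of the governance functions Palladino describes, here is a minimal sketch in Kong’s declarative configuration format, with hypothetical service, route and consumer names: an API-key requirement plus a per-consumer rate limit on one exposed API.

```yaml
_format_version: "2.1"

services:
  - name: orders-api                 # hypothetical upstream service
    url: http://orders.internal:8080
    routes:
      - name: orders-route
        paths:
          - /orders
    plugins:
      - name: key-auth               # callers must present an API key
      - name: rate-limiting
        config:
          minute: 100                # e.g. 100 requests per minute, per consumer

consumers:
  - username: partner-app            # hypothetical onboarded consumer
    keyauth_credentials:
      - key: partner-app-key         # key provisioned for this consumer
```

A file like this could be loaded through Kong’s declarative (DB-less) mode or synced with decK; onboarding another partner is then just another `consumers` entry, possibly with a different rate-limit tier.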

Swapnil Bhartiya: Kong Mesh 1.5 also supports Windows. It seems like a big deal. So talk about the support and what it means for the developer ecosystem and teams.

Marco Palladino: Well, we do have an architecture that has allowed us, since day one, to support as many distributions as possible. We used to call Kuma the universal service mesh because it ran everywhere since day one, right? That was an innovation in the service mesh landscape. No other service mesh has been built with this underlying abstraction that allows it to support all of these different environments. So Kuma was quite unique in the way it was built; the architecture was built for these kinds of use cases. Adding Windows support essentially allows any architect who has teams working on Windows stacks, .NET engineers, .NET products and so on and so forth, to be able to add those applications and those microservices to the mesh in a Windows zone. Which means that we can run the data plane proxy on Windows, but also the remote control plane, the zone control plane, on top of Windows.

If you remember, Kong Mesh has a primary control plane, the global one, and then as many secondaries as we need, which we call the zone control planes. So essentially with the Windows support, we support Windows on both the zone control plane and the data plane proxy. And once it’s up and running, workloads that are not running on Windows can start discovering and consuming Windows workloads, and vice versa. So we can also mix and match Windows with containers, with Kubernetes, with Linux virtual machines. It’s quite exciting, especially since the intended target for this feature is the enterprise architect that must support Windows as one of the additional platforms for the application.
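To make the zone idea concrete: in Kuma (and so in Kong Mesh), a workload running outside Kubernetes is attached to the mesh through a Dataplane definition that the `kuma-dp` proxy runs with. A hedged sketch for a hypothetical .NET service on a Windows VM might look like:

```yaml
type: Dataplane
mesh: default
name: dotnet-orders-1             # hypothetical dataplane name
networking:
  address: 10.0.0.12              # address of the Windows VM (hypothetical)
  inbound:
    - port: 8080                  # where the .NET service listens
      tags:
        kuma.io/service: dotnet-orders
  outbound:
    - port: 15432                 # local port the app dials to reach its database
      tags:
        kuma.io/service: postgres
```

The proxy would then be started on the VM with something like `kuma-dp run --cp-address <zone-control-plane> --dataplane-file dataplane.yaml --dataplane-token-file token`, after which non-Windows workloads in other zones can discover and consume `dotnet-orders` like any other mesh service.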

Swapnil Bhartiya: We’re almost at the end of the year, but what plans do you have next? What other exciting things are happening within the Kong company and community?

Marco Palladino: Well, as we mentioned in the keynote, we crossed 400 enterprise customers. We’re growing quite fast, and these are large organizations, top Fortune 500, top Global 5,000, that are really implementing our technology to change the life of people, to change how finance works, to change how travel works. Right? So as a founder, I’m very excited to see how this technology is being used in the real world. With these 400 enterprise customers, not only do we support them from a product standpoint, but we also support them from a partnership standpoint. We have increased the size of our customer success team 4x over the past 12 months in order to be able to address all the requests for customer enablement, for training, for education. We do have something called Kong Academy, which is our training and learning platform.

So we’re really building an organization here that is going to be supporting our customers from a product standpoint, from a technology standpoint, and that is the innovation that Kong has created throughout the years, but most importantly, from a partnership standpoint. You see, when implementing technologies like gateways and service meshes, that work never stops. It’s a partnership, it’s long term, and we want to support it. Now, when it comes to our products, I’m very excited about all the things that we’re doing with Konnect. Of course, the technology, the service meshes and gateways, is downloadable on premise, so anybody can run a self-managed version of our software.

But even more importantly, we launched Konnect a year ago, and we’re seeing lots of adoption of Konnect. We are making sure that everything is one click away, to give that one-click experience that we are familiar with from other products like Datadog, or MongoDB Atlas, or Elastic Cloud, and so on and so forth. So we are making lots of investment into the cloud to remove operations from the equation. We really want our customers to focus on their output. We want them to focus on the applications they’re building, and everything else they can partner with us to remove from the equation.

Swapnil Bhartiya: Excellent. And this is a question you can answer either diplomatically or as bluntly as you can. If I ask today, who is a core competitor of Kong? Who do you see as your competitors?

Marco Palladino: We compete with many of the first-generation API management products that have existed for a very long time. Companies like MuleSoft have an API management solution, Apigee as well; these are technologies that were born in the first era of API management, when mobile became a big trend. Back in 2007, 2008, 2009, everybody was scrambling to build mobile applications or to build ecosystems of developers, like Facebook did with its API, like Google did, and they needed an API management solution at the edge to make that happen. The problem is that those API management solutions are monolithic, because they were born in a monolithic world. So when the world changed with containers and with Kubernetes, when we needed a decentralized gateway or a decentralized service mesh that allows us to connect our APIs in a more performant way, a more lightweight way, a more extensible way, those products were not born for this new world. And that’s why we built Kong.

Swapnil Bhartiya: Excellent. Thanks for explaining that. Marco, it’s always fun to talk to you, and I would love to have you back on the show. Thank you for your time today.

Marco Palladino: Thank you so much.