Rancher Labs is a leading Kubernetes player that builds software that helps enterprises deliver Kubernetes-as-a-Service across any infrastructure.
We sat down with Rancher Labs co-founder and CEO Sheng Liang to reflect on the consolidation happening in the Kubernetes space and what it means for Rancher Labs.
Here is a lightly edited version of our video interview with Liang.
Swapnil Bhartiya: The year 2019 witnessed many mergers and acquisitions in the cloud native space. What do you make of it?
Sheng Liang: I think all this consolidation and M&A (merger and acquisition) activity in our space is pretty much an indicator of how fast the market has matured. Technologies like Kubernetes and Docker used to appeal mainly to tinkerers and mega-scalers like Google and Facebook; there was very little interest outside of that group of people. But mass-market, enterprise adoption of these technologies created new markets. It suddenly created a huge opportunity for a lot of players who entered the market.
Swapnil Bhartiya: Looking at these changing market dynamics, what does the future look like for Rancher Labs? What are your long-term and short-term goals?
Sheng Liang: A lot of these things happen beyond our control. But our long-term goal has never changed: we remain steadfastly focused on building the business. I’m certainly not against acquisitions or being acquired, but it happens when it happens.
Whether the company goes public or gets acquired is not the important point. What’s really important is that the company has a meaningful business, which means we have to build a differentiated product that addresses market needs. That’s what Rancher is focused on, and that’s what has made us the success we have been so far.
Swapnil Bhartiya: Compared to the early days, when the users were tinkerers and hyperscale companies, today almost everyone is using Kubernetes. There are now use cases that you didn’t even think of as ideal use cases for Kubernetes. So, what unique challenges do you now see that you have to address as a solution provider?
Sheng Liang: I’ve never been through a market or technology evolution that has happened as quickly and as dynamically as Kubernetes. When we started some five years ago, this space was much more crowded, and a lot of our peers didn’t make it, for one reason or another. They weren’t able to adjust to the change, or they chose not to adjust to some of the changes.
I will give an example. In the early days of Kubernetes, the most obvious opportunity was the Kubernetes distro and Kubernetes operations business. It was a new piece of technology, known to be reasonably complex: reasonably complex to install, reasonably complex to upgrade, reasonably complex to operate.
There was just a stampede of vendors rushing to provide solutions to that problem. We knew at the time that, as soon as cloud providers like Google decided to offer Kubernetes as a service, essentially for free, the upside for the business of actually operating and supporting Kubernetes was going to be very limited.
There was a bright side to it too. With the arrival of the cloud vendors, Kubernetes was now within reach of the organizations that had been slow to adopt it because of all that complexity. We knew that Kubernetes was going to become ubiquitous, that it was going to be an industry standard. So we were one of the very few companies able to see one step further than everyone else, and we realized that Kubernetes was going to become the new computing standard, just like TCP/IP became the networking standard.
What we really saw was the opportunity Kubernetes brings. Kubernetes today is available on all major public clouds, and it’s available for on-prem hardware. We have projects like K3S, which bring Kubernetes to hardware like the Raspberry Pi, set-top boxes, surveillance cameras, wind turbines and so on. In a nutshell, where there’s compute, there’s going to be Kubernetes.
The great thing is that it’s not just processors or memory; it’s the whole cloud-native platform, which allows you to deploy anything from a simple website all the way up to the latest and greatest big data analytics and AI software.
Basically, you have almost the power of the whole AWS platform at your fingertips. And it’s portable: essentially, it can go wherever Kubernetes can go.
At Rancher, we were able to build a unique platform, which at the time we called a multi-cluster management platform. But it is really an enterprise computing platform. It is the first place that enterprise admins and DevOps engineers come when they want to deploy their applications on any type of infrastructure, from public cloud to private cloud to the edge. It allows them to control access and ensure the reliability of that infrastructure and of all the applications on it. It makes all of that work for their internal folks, like DevOps teams and IT operators.
Swapnil Bhartiya: As this consolidation is happening, we are also seeing new faces. Companies like Cisco and Linode are now offering Kubernetes. Many of these could be your competitors, whereas many would be your potential customers and partners. So what does it mean for Rancher Labs?
Sheng Liang: You’re so right. I will give you an example, because Rancher historically had offered a Kubernetes distro, and that put us in competition with things like PKS and OpenShift. We still compete with them today, but increasingly, where we see our big opportunity is really no longer in providing a Kubernetes distro for the data center or for the cloud. In the cloud, we always recommend that our customers use EKS or GKE. Why would you want to pay a vendor like Red Hat, or even Rancher, for a Kubernetes distro and then operate it yourself, when you can just call an API and create an EKS cluster straight on Amazon? And Amazon would actually operate it for you. It’s just a no-brainer. From our perspective, at the Kubernetes distro layer, the main opportunity where we can add a lot of value is at the edge.
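For readers who want to see what “just call an API” looks like in practice, here is a minimal sketch using the boto3 SDK for Python. The IAM role ARN and subnet IDs are placeholders, and this is only an illustration of the hosted-Kubernetes model, not how Rancher itself provisions clusters.

```python
# Minimal sketch: creating an EKS cluster with a single API call via boto3.
# The IAM role ARN and subnet IDs below are placeholders for illustration.
import boto3

eks = boto3.client("eks", region_name="us-west-2")

response = eks.create_cluster(
    name="demo-cluster",
    roleArn="arn:aws:iam::123456789012:role/eks-cluster-role",  # placeholder role
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholder subnets
    },
)

print(response["cluster"]["status"])  # the new cluster starts out as "CREATING"
```

Once the cluster becomes active, AWS operates the control plane; you only manage the worker nodes you attach to it.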
To us, the edge is anything that’s just not in the cloud. It’s not something that you get at Amazon, Google or VMware. It’s places where people are running maybe one node, maybe a small cluster of nodes; maybe it’s a CDN point of presence, maybe it’s a wind power generation plant, maybe it’s a cruise ship, maybe it’s an aircraft. Literally, we’ve got customers working with Kubernetes in all of these environments today. So it’s tremendously exciting in terms of footprint. I would venture to say it dwarfs any kind of data center and cloud presence we’ve seen today. And that’s something I’m very excited about.
The other opportunity is obviously with the Kubernetes clusters themselves, whether they are GKE, EKS, PKS, OpenShift, or K3S deployments. Ultimately, there needs to be a centralized control plane to manage them all. That’s what Rancher is, and we’re going to continue to build it up. Rancher is really becoming the next-generation enterprise compute platform, built on the standard compute protocol called Kubernetes.
We are going to continue investing in that. We’re getting tons and tons of traction with our business. In 2019, our annual recurring revenue grew by 169% compared to 2018. We’re literally way north of triple-digit growth, and I just don’t see any sign of that growth slowing down. We’ve stayed one step ahead of the rest of the industry, which is still trying to fight that Kubernetes distro game.
As of today, we’ve moved way beyond that. We’ve pushed Kubernetes to the edge and created a centralized enterprise Kubernetes management platform that can scale to hundreds of thousands, even millions of clusters. The kind of scale we’re talking about is something I would never have fathomed just a couple of years ago. So that’s where we see the market opportunity, and I’m really excited about it.
Swapnil Bhartiya: What are the use cases you are interested in, or that are driving your growth?
Sheng Liang: To be fair, today the vast majority of the actual production deployments we see are still the more classic use cases: Kubernetes in the data center and Kubernetes in the cloud. That’s like 80-90% of the actual deployments, and it has shown no sign of slowing down, because the fundamental problem is that the clouds are not compatible with each other, and Kubernetes solves that very well. Kubernetes is a great application modernization solution, it’s a great hybrid cloud solution, it’s a great onboarding solution, it’s a multi-cloud solution. So that’s where we still see the real driver behind our business.
But we’re seeing some newer and more interesting use cases too, as I touched on. I think the edge is really interesting, because those are platforms where people traditionally didn’t see a need, or even room, for Kubernetes, and the reason for that is quite fundamental. Let’s take a surveillance camera as an example. Historically, these things didn’t even have a lot of computing power, but that’s changing. If you look at the kinds of things people want to run on surveillance cameras, you will be surprised: maybe object recognition, maybe face recognition. In the old days you would stream that data into some kind of environment, maybe into the cloud, and have it all processed there. But you don’t want to do that today for many reasons: latency, privacy and so on. It’s becoming a bit of a challenge.
Today, users want to process it locally. So think about what kind of software they would use: it’s basically the AI software, or the data analytics software, that we’re all used to.
So all of a sudden, that stuff is happening inside a camera. These days a lot of these software packages are containerized and deployed as Kubernetes workloads. Very naturally, then, with K3S we’re able to shrink Kubernetes down so far that it can fit into something as small as a Raspberry Pi with 512 MB of memory, so even these cameras have more than enough capacity to use Kubernetes as an app orchestration platform, as opposed to a container or infrastructure orchestration platform.
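As a rough illustration of what “deployed as a Kubernetes workload” could look like on such a device, here is a hedged sketch using the official Python kubernetes client against a K3s kubeconfig. The image name, labels and resource limits are hypothetical, chosen only to show a workload sized for a 512 MB device; this is not an actual Rancher or customer configuration.

```python
# Hypothetical sketch: deploying a small containerized inference service to a
# K3s cluster using the official Python kubernetes client.
from kubernetes import client, config

# K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml by default.
config.load_kube_config(config_file="/etc/rancher/k3s/k3s.yaml")

container = client.V1Container(
    name="object-detector",
    image="example.com/edge/object-detector:latest",  # hypothetical image
    resources=client.V1ResourceRequirements(
        # Keep the workload small enough to fit alongside K3s on a 512 MB device.
        requests={"memory": "64Mi", "cpu": "100m"},
        limits={"memory": "128Mi", "cpu": "250m"},
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="object-detector"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "object-detector"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "object-detector"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```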
Now all of a sudden you have these Kubernetes engines running everywhere, and you need a fleet manager to tie them all together: to make sure they are upgraded, make sure they’re patched, and check on their health.
With the current generation of Rancher, we can scale to 2,000 clusters, which is already quite remarkable. With the next generation of Rancher, we’re talking about scaling to over a million clusters. It’s a big focus for us this year, and I’m super excited about it.
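To give a sense of the kind of bookkeeping a fleet manager has to do, here is a toy sketch that loops over kubeconfig contexts, one per cluster, and reports node health and kubelet versions. Rancher’s actual implementation is far more involved; the one-context-per-cluster setup here is an assumption made purely for illustration.

```python
# Toy sketch: checking node health and versions across many clusters,
# assuming one kubeconfig context per cluster in the local kubeconfig file.
from kubernetes import client, config

contexts, _ = config.list_kube_config_contexts()

for ctx in contexts:
    name = ctx["name"]
    core = client.CoreV1Api(api_client=config.new_client_from_config(context=name))
    for node in core.list_node().items:
        # Find the "Ready" condition reported by each node.
        ready = next(
            (c.status for c in node.status.conditions if c.type == "Ready"),
            "Unknown",
        )
        print(f"{name}/{node.metadata.name}: Ready={ready}, "
              f"kubelet={node.status.node_info.kubelet_version}")
```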
Swapnil Bhartiya: Kubernetes is growing so explosively that you don’t even know what the next use case will be. That also means you interact with a lot of customers who come to you with problems that you realize the community still needs to solve.
Sheng Liang: I would say at this point I’m not as worried about installing and operating Kubernetes anymore, because of things like self-hosted Kubernetes and solutions like EKS, GKE and K3S. I don’t think it’s as big of a problem anymore.
But I think on the consumption side, it’s still quite challenging. With enough education, the basic concepts of Kubernetes are definitely gaining more and more acceptance. It’s not like the past, when every Kubernetes talk had to start with explaining what a pod is, what a sidecar is, and so on. You don’t really have to do that anymore.
At least people are getting the basic concepts of Kubernetes. The problem is that the ecosystem is not slowing down. There are new technologies and concepts: you now have Istio, Knative, Tekton and so on. By the time you think things are stabilizing, you get hit with a whole new set of concepts and have to catch up with them.
That remains a big area of focus for Rancher. We created a project called Rio, where the whole idea is that a developer or a DevOps engineer doesn’t really have to read 10 different manuals or go to class for weeks just to get productive. They can get productive on day one and just focus on solving business problems and creating business logic.
I think Rio is the first step in that direction and you’ll see a lot from us going forward. Rio’s still in beta but we are evolving it really quickly. We are bringing some of its features into Rancher in the upcoming release. I think that’s going to continue to be a big challenge going forward.
I think at some point, most people won’t even see Kubernetes at all; they’ll only interact with it through CI/CD or other tools. But I’m not sure that’s necessarily going to be the future either, so I think that chapter is still being written, and you should be hearing more about it in the coming year from us and from the community.