Classic Cloud Foundry deployments continue to present numerous challenges, and the cost of getting it wrong can be severe. Cloud strategies are changing, with organizations wanting to move toward something based on Kubernetes; however, navigating that transition can be difficult. In this episode of TFiR Let’s Talk, Swapnil Bhartiya sits down with Julian Fischer, CEO and Co-Founder of anynines, to discuss the key challenges for classic Cloud Foundry deployments.

He says, “The challenge is how do you take from the infrastructure provider what gives or helps you to generate value as quickly as possible while maintaining that healthy balance to be able to adopt additional, or maybe even migrate to different, infrastructure vendors for cost or legal reasons? That’s one of the key challenges in building platforms. If you make a mistake there, accept too much of a vendor lock-in, the costs in the midterm are way beyond imagination.”

Key highlights from this video interview are:

  • Organizations that have invested in Cloud Foundry and run applications on top of it are now deciding whether to move toward something based on Kubernetes. Fischer explains the direction anynines is taking, setting up Cloud Foundry on Kubernetes internally first, and some of the challenges with classic Cloud Foundry deployments.
  • Fischer describes how anynines leverages open source technologies to make them consumable by the enterprise. Adding more Kubernetes-related modules to help organizations fit them within their organizations does not change or conflict with anynines’ long-term strategy.
  • Fischer outlines two key challenges affecting users of classic Cloud Foundry: first, making informed decisions when building a platform for an organization, which requires monitoring different technologies and knowing the organization well; and second, making sure that what you take from the infrastructure provider enables you to generate value as quickly as possible.
  • Fischer discusses the pricing models and how they are changing in the market. anynines offers operational credits and license credits, with automation falling under license credits and operational services under operational credits. He explains how this helps organizations adapt platforms over time.
  • There are key drivers behind why enterprises want to migrate to Kubernetes. One is that Kubernetes is ubiquitous, but it still comes with the challenge of running a multitude of Kubernetes clusters. Fischer discusses how anynines provides automation that delivers operational efficiencies as close to those of large Cloud Foundry platforms as possible.
  • Fischer shares his insights into the right approach for adopting Kubernetes. He explains the importance of having operational efficiencies in place and having centralized governance around building application development platforms within a larger organization, with local autonomy.
  • One of the challenges of vendor lock-in is risk, not only with infrastructure but with data service vendors as well. Fischer describes scenarios of how vendor lock-in can affect organizations and how they can mitigate that risk to avoid damage to the business.
  • Fischer discusses how technical debt makes migration challenging for organizations. He explains how, as a Cloud Foundry user, you learn to use buildpacks, and how this keeps knowledge in the technology rather than in one person. He believes that the shared foundation of declarative operational models also enables quicker learning, with more freedom.
  • Since cloud strategies are evolving, anynines’ offerings and services are going to change too. Fischer feels Cloud Foundry will continue to play a central role in this evolution. He explains the communication challenges in exchanging information about services, whether from external vendors or from within organizations, and tells us how the anynines marketplace is helping with this challenge.
  • Fischer goes into depth about mitigating infrastructure lock-in risk, and the challenges of being infrastructure-independent and managing tenants. He discusses why going all in with a particular infrastructure provider’s identity management, if you don’t have a primary identity server of your own, can be problematic, and tells us how anynines UAA can help.

Connect with Julian Fischer (LinkedIn, Twitter)
Learn more about anynines (Twitter)

The summary of the show is written by Emily Nicholls.


Here is the automated and unedited transcript of the recording. Please note that the transcript has not been edited or reviewed. 

Swapnil Bhartiya: Hi, this is your host Swapnil Bhartiya, and welcome to another episode of TFiR Let’s Talk. And we have with us, once again, Julian Fischer, CEO and Co-Founder of anynines. Julian, it’s great to have you on our show.

Julian Fischer: Great to be here again. It was wonderful [inaudible 00:00:14] you in Spain. So now we are virtual again. I missed the physical presence.

Swapnil Bhartiya: That’s so true. Especially the vibe that you feel when you sit with someone across the room; that is totally different than doing it remotely. But I’m also kind of grateful to this technology, that it does enable us, even if we are sitting on two different continents, to see each other and talk to each other in real time. So this is incredible. Now, let’s go back to the interview. A lot of things have been happening in the industry; we have been talking to anynines, and the Cloud Foundry ecosystem is evolving and changing. Can you talk about, because you folks also deal with the users of classic Cloud Foundry, what are some of the pain points that you see users facing because of the change in the industry?

Julian Fischer: Well, organizations that have invested in Cloud Foundry, and that also build platforms that have technical adoption, so they’re running applications on top of it, are currently asking themselves what to do and where to go. We’ve been discussing this topic several times over the past years, and we keep on providing updates on it. The interesting thing is that the story looks a bit different for each organization you’re talking to. For example, it’s different if you have a very small organization where the number of applications is relatively low. You may have an opportunity to just move to Kubernetes. However, maybe you really bought into the Cloud Foundry user experience, which has very interesting upsides; for example, you can manage security and policy with buildpacks much better than with container images.

Although there are possibilities to do that as well. So, if you want to preserve that CF push experience, the question is, is it now time to move away from classic Cloud Foundry and maybe use something that’s based on Kubernetes? Our current assessment is this: we are setting up Cloud Foundry on Kubernetes internally. We are addressing internal clients before we publish anything on our public platform, and that’s happening before we actually add it to our anynines platform offering for on-premise clients. So we are in that first stage, and there have been a lot of changes over the past years. Think about KubeCF and its successor project, Cloud Foundry for Kubernetes, which adopted Kubernetes much more than KubeCF did, and then the new approach that’s even more Kubernetes-native than Cloud Foundry for Kubernetes. I think that’s heading in the right direction.

However, if you consider a large Cloud Foundry environment in contrast to a smaller one, you still come to the problem that there’s not a one-to-one mapping between a large Cloud Foundry environment, which may host, let’s say, 10,000 or 5,000 applications, and a single-Kubernetes-cluster-based Cloud Foundry successor. We don’t see that at the moment. And even once the technology is at that point, we still need time as a community to build an operational practice around that new technology that’s solid enough so that a migration may take place. We don’t believe that this time is now. We believe we’re getting there; while we’ve been talking, things have been moving in the right direction. But at this point, it’s still too early to migrate those large platforms. So we see our classic Cloud Foundry business still grow.

And a lot of these businesses are in that situation where there’s increased cost pressure and they’re dealing with commercial vendors asking for license fees for their Cloud Foundry distributions, something that we usually help to solve by giving them an open-source-based Cloud Foundry with full remote operations and all the utilities they need, so they can save costs right away. And then we help them to figure out a tailored platform strategy that will work for their organization over the next five years.

Swapnil Bhartiya: You mentioned the next five years. If I may ask, if you look at anynines itself, what is your long-term plan or long-term strategy?

Julian Fischer: Well, the good thing is we don’t have to change strategy at all. We’ve always been leveraging open source technologies and making that technology consumable by the enterprise. So it’s basically about wrapping open source technology in automation and getting experience in how to operate it, so that enterprises can do that. Building an operational model that’s enterprise-grade for open source is what we do. And if you think about the anynines platform, we add new modules; for example, we add modules for on-demand provisioning of Kubernetes clusters, for maintaining workloads on Kubernetes clusters, or for data service automation. It’s basically what we did with Cloud Foundry too, but with new technologies. So the future is basically getting more of those modules, more of them will obviously be related to Kubernetes, and then helping organizations to fit those modules within their organizations.

Swapnil Bhartiya: Excellent. I’ll come back to [inaudible 00:06:16] later. I want to go back to the point of some of the pain points. When we look at those users of classic Cloud Foundry, there could be technological challenges. There could be people challenges, because there’s a mismatch between supply and demand of folks who know these technologies. And third is the pricing challenge as well; as you said, the pricing models are changing. So which of these challenges are, not quite deal breakers, but the really big challenges where you feel folks need help? Or are these all important, and how is anynines helping?

Julian Fischer: Well, the thing is that making informed decisions when building a platform for a particular organization requires monitoring a lot of different technologies and also knowing your own organization very well, and then combining the two. So what we help with is being a mediator between the technology and the organization, and we help clients with best practices to find their tailored platform solution. Just to give you one example: if you build a platform, one of the strategic decisions any organization has to make is how much infrastructure vendor lock-in to accept on day one. For example, you can get a Kubernetes cluster from every infrastructure these days. However, let’s say you tie all your Kubernetes cluster lifecycle management to a particular API. So you have an organization, and you know that you’re going to have 100 or 200 or 500 Kubernetes clusters over the next few years.

You also have to think about how you organize lifecycle management. So you are then looking for tools that will allow you to declaratively describe such a cluster, including its workloads and extensions you may want to install, and make that happen for the engineers in charge of it. So how do you do that without tying into the infrastructure APIs too much? I mean, in the client base we have, it’s been a repetitive issue that managers have been very, very certain that they will go with that one particular infrastructure provider. And within the time range of two to three years, nearly all of them had to move, at least to adopt additional infrastructure providers, because customers wanted it, or for legal reasons when expanding into, let’s say, other territories. And that means you need to be open for that step. And then the challenge is how do you take from the infrastructure provider what gives or helps you to generate value as quickly as possible, while maintaining that healthy balance to be able to adopt additional, or maybe even migrate to different, infrastructure vendors for cost or legal reasons? That’s one of the key challenges in building platforms. If you make a mistake there, accept too much of a vendor lock-in, the costs in the midterm are way beyond imagination.

Swapnil Bhartiya: Right. Can you quickly talk about, sticking to the topic of cost and pricing itself, how you are structuring yourselves and making it easier for folks? Can you talk about that, because the pricing models are changing in the market?

Julian Fischer: Yeah. So our pricing model is basically that we offer so-called operational credits and license credits. Any automation that we provide basically falls under license credits, while any operational service falls under an operational credit. The idea is that your platform, or the platform of any organization, will evolve over time. You are adding components, removing components, shifting workloads, exploring new infrastructure territories, and so on. So the idea of license and operational credits is that you can purchase those credits, but underneath, pick the modules from the anynines platform you want and exchange both the automation as well as the operational services. This helps to adapt platforms over time, which is something that naturally has to happen.
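Fischer’s two-pool credit model can be sketched as a simple ledger, where automation modules draw from a license pool and operational services from an operational pool, and swapping a module moves credits rather than requiring a new contract. This is an illustrative sketch only; the class, the module costs, and the numbers are invented for the example, not actual anynines pricing.

```python
# Illustrative sketch of a two-pool credit model. All names and
# numbers here are hypothetical, not actual anynines pricing.

class CreditLedger:
    def __init__(self, license_credits, operational_credits):
        self.pools = {"license": license_credits, "operational": operational_credits}

    def consume(self, pool, amount):
        # Automation draws from "license", operational services from "operational".
        if self.pools[pool] < amount:
            raise ValueError(f"not enough {pool} credits")
        self.pools[pool] -= amount

    def swap_module(self, pool, old_cost, new_cost):
        # Swapping a module refunds its cost and charges the new one,
        # which is how a platform can evolve without renegotiating.
        self.pools[pool] += old_cost
        self.consume(pool, new_cost)

ledger = CreditLedger(license_credits=100, operational_credits=50)
ledger.consume("license", 30)      # e.g. a data service automation module
ledger.consume("operational", 20)  # e.g. remote operations for it
ledger.swap_module("license", old_cost=30, new_cost=40)
print(ledger.pools)  # {'license': 60, 'operational': 30}
```

The point of the sketch is the `swap_module` step: components can be exchanged over time while the purchased credit volume stays fixed.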

Swapnil Bhartiya: Perfect. Now I want to talk about Kubernetes quickly, as you talked earlier about internal as well as external clients. When it comes to folks migrating to Kubernetes, can you talk about how much velocity you’re really seeing there, what folks are planning, and what is driving it? Is it really based on the fact that they need to move to Kubernetes, or is it also based on the market trend that, “Hey, Kubernetes is where the future is”?

Julian Fischer: Well, there are both scenarios. It is a common scenario that certain development departments use Kubernetes because it’s just so ubiquitous. It’s just everywhere. That comes at the cost of operating workloads on Kubernetes clusters, and assuming that there’s a multitude of Kubernetes clusters, that’s not as operationally efficient as running, for example, a single Cloud Foundry. Well, with those statements you always have to see the context. For example, if you have a fistful of Kubernetes clusters, you are all in on one infrastructure vendor, and vendors are not equal when it comes to how comfortable it is to run Kubernetes. Let’s pick, for example, Google Cloud Platform, which is rather comfortable for creating Kubernetes clusters, in contrast to AWS, where it’s more of a pain. So if you’re comfortable with that API vendor lock-in, then managing a fistful of Kubernetes clusters isn’t a big problem.

You can definitely solve it, and running applications there is fine. However, if you have thousands of them, a Cloud Foundry with its CF push experience and its underlying BOSH is a very nice thing to have, especially if you don’t want that vendor lock-in, and that’s still the case. And then there are the departments within those organizations that have special requirements. For example, they have legacy applications that don’t fit the twelve-factor model. They’re not cloud native, they store data in the file system, and they have other ties; let’s say they are lift-and-shift applications taken from virtual machines, or even physical machines. I think that’s Kubernetes territory. If you, for example, have a workload that you purchased that comes with Kubernetes as an assumption, as its operational model, let’s say the alternative to what used to be an appliance, Kubernetes is the first choice.

So what I’m saying is, one of the things we need to do with our clients is an analysis of the workloads they have on average. That usually falls into different categories, and for each category a technical solution can be drafted that provides an operational model with a sustainable economy to it. And then the decision for a technology is made, including Kubernetes. And it’s not only about whether to use Kubernetes, but also what a Kubernetes cluster should look like, because it’s different whether you use vanilla Kubernetes, that’s just a plain Kubernetes cluster, or, let’s say, have a Kubernetes cluster template that comes with various Kubernetes extensions, for example a service mesh, a set of operators, logging and metrics collection extensions, so that the developers deploying to this Kubernetes cluster already have their mini platform that’s tailored to their particular needs.
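The cluster-template idea Fischer describes, a vanilla cluster versus one pre-loaded with a service mesh, operators, and logging and metrics extensions, can be sketched as plain data that a platform team renders into concrete clusters. The extension names below are hypothetical placeholders, not a real anynines module list.

```python
# Sketch: a cluster template bundles a base Kubernetes version with a
# curated set of extensions, so teams get a tailored "mini platform"
# rather than a plain cluster. All names are hypothetical.

VANILLA = {"kubernetes": "1.29", "extensions": []}

DEV_TEMPLATE = {
    "kubernetes": "1.29",
    "extensions": [
        "service-mesh",        # traffic management between services
        "postgres-operator",   # data service automation inside the cluster
        "logging",             # log collection shipped to a central sink
        "metrics",             # metrics collection for the platform team
    ],
}

def render_cluster(name, template):
    """Declaratively describe a concrete cluster from a template."""
    return {"name": name, **template}

cluster = render_cluster("team-a-dev", DEV_TEMPLATE)
print(len(cluster["extensions"]))  # 4
```

Keeping templates as data is what lets the lifecycle tooling Fischer mentions stamp out hundreds of consistent clusters instead of hand-building each one.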

And this is something that we focus on at anynines these days: talking to clients, making those analyses. What do their workloads look like? What do their developers need? And then helping to provide the automation so that the lifecycle management of these clusters, with a growing number of them, provides operational efficiencies that are as close to large Cloud Foundry platforms as possible.

Swapnil Bhartiya: Excellent. And as folks are experimenting with embracing Kubernetes, do you see that they are making mistakes, when you look at the patterns? Or, if you would suggest the right approach, the right way of embracing or adopting Kubernetes, what would that be?

Julian Fischer: Yeah, that’s actually a good question. So the adoption of Kubernetes is often, well, not always, but often a rather naive one. It starts somewhere at the lower levels of an organization, where individual departments somehow get their hands on Kubernetes clusters, and they start to make their own experiments. If you let that happen as an organization, over time you’ll have different Kubernetes adoption in different departments, using Kubernetes in different ways, and also having emotional ties to using Kubernetes. I mean, that’s similar for a lot of different technologies. The drawback of just letting that happen is that there’s no policy, for example, around where the Kubernetes should come from, or what operational standards every team has to maintain to uphold those operational efficiencies, to avoid generating a lot of cost or waste in terms of wasting people’s time.

If that is not there, then waste and increased cost are unavoidable, because you’re looking at complicated and complex integration projects, where it’s not only the technology side; those teams are also getting emotionally attached to their habits and their choices. So without guidance, these problems become problematic from both the technological as well as the human perspective. Therefore, especially after five years of experimenting, the attempt to consolidate cloud platforms into a single strategy becomes a very conflict-burdened project that is hard to carry out. They are not impossible, but they’re uncomfortable. So it’s always good to have a kind of centralized governance around building application development platforms within a larger organization, with local autonomy to carry out experiments, but feeding back the learnings.

It is, for example, very important that if there are deviations from a company-wide policy standard, these deviations are documented and there’s reasoning behind them. So, for example, if you have a department that works with, let’s say, Kubernetes clusters that run on machinery somewhere in a plant, rather than in a centralized data center, the management of application dependencies, for example the question of where to run data services, will be answered very differently than on a centralized platform where virtual-machine-based data service automation already runs alongside Kubernetes clusters. There, going with operators would be less of a preferred choice. It’s not impossible, but it’s maybe not as motivated; you’re not as motivated to do so. So those are the context-specific things you need to consider. And that’s something where clients need to have experience at the CIO level and at the platform team level.

But if these organizations are still experimenting, that knowledge isn’t there, or it’s still disconnected. There’s something happening at the top, at the CIO level, and there’s something happening at the engineering level, and they’re disconnected for too long. And then you run into those integration problems and conflicts as well, because when those two layers meet at some point, that’s when you do a lot of meetings. The way to get around this is to follow best practices in building application development platforms early on. That’s our advice. It’s not always possible, but it saves a lot of time and money.

Swapnil Bhartiya: Right. I want to go back to the point you were making about vendor lock-in when it comes to choosing the vendor. Another thing is that applications can come and go; they can run wherever. The most valuable asset companies do have is their data, and the fact is that clouds have data gravity. So how do you look at that problem? So that, once again, it doesn’t matter which cloud vendor you choose, and even once you are locked in, folks can still, if they change their mind or whatever the business requirements are, move around very easily.

Julian Fischer: That’s an excellent point. And it’s one of the pain points in managing that infrastructure, or that general vendor lock-in risk. It’s not only about infrastructure; as you mentioned, some data service vendors also turned out to be, let’s say, creating a lock-in. So take any vendor lock-in, whether it is the infrastructure that gave you a lot of discounts in the early days and is now gradually taking away and removing those discounts: that would be one motivation to have an alternative presence, so that you can split your workloads and have an alternative to a greedy negotiation partner, I’d say. But if you look at the data layer, for example, if you buy into an infrastructure-specific vendor, well, then you’re not only having the problem that you may have to buy a license; you also have the problem that you cannot move into another territory.

So if you start in Europe or in the US, let’s say on AWS, and then you move to Asia, which is one of my favorite scenarios because it creates the impossible situation that customers in Asia will have to use, let’s say, Ali Cloud or any other infrastructure provider that’s closer to the government. And if you then do not have the technical ability to move your workload, you are not making any business there. And that would be tremendous damage to the evolution of a business. So in that case, how can you mitigate that risk? We believe that with any application developer platform, you never get around a lock-in, but the lock-in should be against open-source-based APIs.

We’ve recently had some significant, I would say, damage to that open source ideology, after some of the open source vendors started to migrate to a non-open-source license; the SSPL license would be one of the examples. I mean, there are arguments behind that too, but in general, it is a safer path to go for such an open source technology, because even in the SSPL event, alternatives show up at some point, so that you’ll have more flexibility to move on. For example, our platform products are basically automation. So if your applications tie to Postgres or they tie to Redis, they tie to an open source product. The fact that we wrap it in automation that makes it easier to consume is not per se lock-in. You can use any other technology that automates the lifecycle of Redis and Postgres, and you’d still be able to move your workload.

And that’s what I mean: you need to understand that certain lock-ins are much more problematic than others, and that tying to open standards is one way to mitigate them. Regardless of which products you choose to get there, tying your systems and forming contracts with those open standards is one of the golden rules, I’d say.

Swapnil Bhartiya: Right. I want to look at a different kind of problem, which has nothing to do with the vendors, which can come from the company itself: technical debt, or tribal knowledge. As you’re talking with companies, they’ve been building things over five years’ time. And as you were also saying, folks get emotionally attached to the things they’re doing. What happens is that you build something for so long that a specific team or person has all the know-how of how things are done. How much of a problem do you see, when you look at the classic Cloud Foundry users, that comes from the tribal knowledge built within the team? If a person moves out of the company, they take that knowledge with them, or they build up so much technical debt that it is challenging for them to even migrate.

Julian Fischer: Well, if you’re a Cloud Foundry user, one of the first things you have to learn is how to use buildpacks. And buildpacks are a technology of their own that, for example, competes with other approaches to creating container images, including using something like a Dockerfile. That knowledge won’t go away. For example, you can use buildpacks with Kubernetes easily; there are several ways to do that. There’s Cloud Foundry on Kubernetes, there’s Kf for Kubernetes. Or just the ability to create your own container image in a CI/CD pipeline using buildpacks, and then use the container images as you would with other, non-buildpack container images as well. So from that perspective, I would say, using Cloud Foundry means understanding twelve-factor apps, means understanding cloud native application design, and that’s best practice on Kubernetes too.

The way you split responsibilities in Cloud Foundry, for example, there are the applications and there are the backing services, is similar in Kubernetes. It’s just that Kubernetes provides you with more abstractions, so that you can, for example, also automate the lifecycle of a message queue and even the database, if you really want to. So it’s not only that you have your application runtime, that’s Cloud Foundry, and then there are, let’s say, the anynines data services that integrate with their service broker into the marketplace. In Kubernetes, you can do that too. And it’s meaningful: especially if you run Kubernetes clusters in a data center where you already have that virtual-machine-based automation, it makes more sense to integrate that virtual-machine-based automation to get a Postgres database, for example. You’ll have better noisy-neighbor isolation, for example, in contrast to an operator-based Postgres where you have pods.

So if you take that, a lot of the knowledge that engineers build in either of those technologies leads to the same mindset, because it’s the underlying learnings from running software for decades. For example, the separation of the ephemeral virtual machine or container from the persistent disk or persistent volume is one of the key learnings. That, again, is the foundation of declarative operational models: whether it is a Kubernetes StatefulSet or a BOSH-based deployment, they follow the same basic ideas. And as long as engineers grasp those basic concepts, they’ll have faster learning, whether they come from Kubernetes and go to Cloud Foundry or the other way around. With Kubernetes being more flexible, there’s more to learn about Kubernetes, and there’s more freedom. It’s always like dealing with a double-edged sword: you can cut yourself much more easily, but you have more freedom to do whatever you want to.
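The shared idea behind a Kubernetes StatefulSet and a BOSH-based deployment, declaring desired state and letting the system converge, can be reduced to a toy reconcile loop. This is a conceptual sketch of the pattern Fischer is pointing at, not how either system is actually implemented.

```python
# Toy reconcile loop: compare declared (desired) state with observed
# (actual) state and emit the actions needed to converge. Both BOSH
# deployments and Kubernetes controllers follow this basic pattern.

def reconcile(desired, actual):
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

desired = {"web": {"replicas": 3}, "db": {"replicas": 1}}
actual = {"web": {"replicas": 2}, "worker": {"replicas": 1}}
print(reconcile(desired, actual))
# [('update', 'web', {'replicas': 3}), ('create', 'db', {'replicas': 1}),
#  ('delete', 'worker', None)]
```

The operator never scripts the steps; the system derives them from the declaration, which is why the mindset transfers between the two ecosystems.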

Swapnil Bhartiya: As you interact with these customers, can you also talk a bit about, based on the discussions you’re having, how the platform-as-a-service market is changing? That will also kind of define or dictate the evolution of the anynines services and offerings that you folks have. So talk about that.

Julian Fischer: That’s a good question. The conversations with clients have changed. Talking to a Cloud Foundry customer a few years ago, they believed Cloud Foundry was going to be the centerpiece of their cloud strategy. Now that assumption is going away, knowing that there are definitely things running alongside Cloud Foundry, and that there are environments where there is no Cloud Foundry at all. The question is, if you look at hundreds of Kubernetes clusters with different extensions and different workloads: these organizations need to consume certain services from external vendors, from infrastructure providers, but they’re also consuming services from within their own organization. So there’s a huge communication challenge in exchanging information about what the platform capabilities of the internal platform actually are, and where to start looking into, let’s say, a particular service offering; whether it’s externally provided or internal doesn’t matter. In Cloud Foundry there has been this central marketplace where you could register those services as Open Service Broker API-compatible services, but a single Kubernetes cluster doesn’t provide anything like it.

You could create CRDs and use the operator pattern to integrate services, but it doesn’t really help on the organization level for, let’s say, engineers to pick which tools they are allowed to use; tools that, for example, have also been cross-checked with internal security, because internal security still needs to understand and ensure that what engineers do is in compliance with the organization’s requirements. So one of the ideas is what we call the anynines marketplace. It looks like a platform-specific shopping cart, sorry to say, where you can register your products. Our products are preset, obviously; products provided by major infrastructure providers can be loaded into that marketplace; but customers can also add their own services. This marketplace can be branded with their own corporate identity. And then they can use this marketplace to tell their engineers, individual departments, individual regions, which services have been cleared for them to use.

So then there’s a product called, let’s say, Postgres, and it’s based on virtual machine automation. There’s a bit of text describing what it does, there’s a link to documentation, some videos to learn from, and maybe also a link to provision it. So it’s pretty much what you can find in the marketplaces of the large vendors, except that it’s an open system. It comes prepopulated with some of the modules that are already present in our platform and that we collaborate on, but it’s open for customers to expand to their particular needs, because up to this point, nearly every client starts creating something like that from scratch, and clients are very differently equipped with the skills, knowledge, and application developer teams who know how to build something like it. So we believe that’s an opportunity to share with the community something that we’ve built, and something that can become maintained by the community over the longer term: solving the problem of how to communicate what engineers are cleared to use, sharing best practices, sharing experience. That’s the idea of the anynines marketplace, because we see this in nearly every platform account.
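The marketplace Fischer describes resembles the catalog concept of the Open Service Broker API, where a broker advertises services and plans for consumption. The sketch below mimics that catalog shape and adds a clearance filter for the internal-security angle he mentions; the `cleared` flag and the service entries are invented for illustration and are not part of the OSB API or the anynines product.

```python
# Sketch of an OSB-style service catalog plus an organization-level
# clearance filter. The "cleared" flag is a hypothetical extension,
# not a field in the Open Service Broker API specification.

CATALOG = {
    "services": [
        {"name": "postgresql", "plans": ["small", "large"], "cleared": True},
        {"name": "redis", "plans": ["cache"], "cleared": True},
        {"name": "external-saas-db", "plans": ["std"], "cleared": False},
    ]
}

def cleared_services(catalog):
    """Return only the services internal security has signed off on."""
    return [s["name"] for s in catalog["services"] if s["cleared"]]

print(cleared_services(CATALOG))  # ['postgresql', 'redis']
```

A marketplace like this answers the communication problem in data: engineers query one catalog instead of asking around which services exist and which are approved.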

Swapnil Bhartiya: Perfect. Do you think you captured everything there, or is there anything else?

Julian Fischer: There’s another aspect.

Swapnil Bhartiya: Go ahead.

Julian Fischer: If you think about mitigating that infrastructure lock-in risk a bit, you think about what things you may take from the infrastructure provider, because it’s so tempting. I would say, keep in mind the idea that it is okay to accept the lock-in to an open standard. For example, Postgres is an open standard. However, the details of a Postgres implementation are a different topic. You may have to adapt, for example, the way Postgres creates backups. That’s something we do for customers a lot, because they cannot use off-the-shelf automation when it’s not compliant with their security policies; but besides that, Postgres is a bit of a specific example here. Another thing that is problematic is user management and authentication. Where do you have your primary identity server? Some of our clients have it as a standalone service.

So that’s their primary identity server. Every customer within the internal organization has its own identity in that identity server, and that is then integrated with the identity server provided by the infrastructure provider, so that you don’t tie into the infrastructure provider’s identity manager too much. And this is the best practice. If you don’t have that, and you go all in with the identity management of a particular infrastructure provider, assuming it’s not based on standard APIs or any standard, that may also be one of the first problematic decisions. For the anynines platform, we came up with anynines UAA, which introduces, for example, a tenant concept that is independent of the underlying infrastructure. If you, for example, go to the infrastructure providers, they have different concepts for implementing what a tenant is. And it’s pretty common that any larger organization needs to split into different tenants, for example based on business units, or even based on teams within the business units, and so on.

So you’re looking at a treelike structure, and then you have users, and those users may also be, let’s say, working in that business unit, or on this team but also on that team, but with different responsibilities. So that’s generally a problem that’s non-trivial to solve, but there’s software to do that. However, the identity provider that we chose to use for the anynines platform is Keycloak. You can create users there, and you also have realms, which appear to be something like a tenant, but actually aren’t. So we decided to create a tenant service for the anynines platform that allows you to say, “Well, this is our business unit XY.” And you can then allow the business unit to own that Cloud Foundry organization, to own this and that Postgres instance, to have access to that AWS account, and so on.

So it’s basically a tenant concept that’s independent of the underlying services and independent of the underlying infrastructure. And then obviously you have user and group management that ties into the whole thing. And if you happen to already have an identity manager, you can integrate it with that anynines platform identity manager, creating shadow users, as you would do when integrating with other providers, for example, as you would have done with Cloud Foundry as well, if that’s an integration path you’ve ever seen. So managing tenants while being infrastructure-independent, that’s one of the challenges we’ve seen many times, and it is something that we are addressing.
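The infrastructure-independent tenant concept, business units forming a tree where each node owns resources across different backends, can be sketched with a small tree structure. The class layout and resource names are illustrative assumptions for this example, not the actual anynines UAA design.

```python
# Sketch: tenants form a tree (org -> business units -> teams), and
# each tenant owns resources that live on different backends. The
# structure and names here are hypothetical, for illustration only.

class Tenant:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.children, self.resources = [], []
        if parent:
            parent.children.append(self)

    def owned_resources(self):
        """Resources owned by this tenant and every tenant below it."""
        found = list(self.resources)
        for child in self.children:
            found.extend(child.owned_resources())
        return found

org = Tenant("acme")
bu = Tenant("business-unit-xy", parent=org)
team = Tenant("team-blue", parent=bu)
bu.resources.append("cloudfoundry-org:xy")
team.resources.append("postgres-instance:42")
team.resources.append("aws-account:dev")

print(bu.owned_resources())
# ['cloudfoundry-org:xy', 'postgres-instance:42', 'aws-account:dev']
```

Because ownership lives in this tree rather than in any provider’s account model, the same tenant can span a Cloud Foundry org, a Postgres instance, and a cloud account at once, which is the independence Fischer describes.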

Swapnil Bhartiya: Julian, thank you so much for taking time out today and, of course, talking about Cloud Foundry, Kubernetes, and anynines. And as usual, I would love to have you back on the show. Thank you.

Julian Fischer: Well, thank you for having me again. It’s always a pleasure. See you next time, and until then, take care.

