
How Kata Containers Empowers AI Users To Secure Their Data And Models Using Confidential Containers Technology


Author: Treva Williams, Kata Containers Community Manager
Bio: Prior to joining the OpenInfra Foundation as technical community manager for the Kata Containers project, Treva was an online instructor, systems administrator, & tailor. Along with singing the praises of Open Source, Treva is also a dedicated Diversity, Equity & Inclusion advocate, speaking as often as permitted on the subject.

Kata Containers has undergone more than five years of development, with several architectural changes implemented along the way. Initially designed to protect infrastructure, the platform has grown to also safeguard container workloads, which led to the creation of the Confidential Containers (CoCo) project. Born from the merger of two existing open source projects — Intel Clear Containers and Hyper's runV — Kata Containers combines the best of both technologies under a common vision: retooling virtualization to fit container-native applications, delivering the speed of containers with the security of VMs. Kata Containers provides an enhanced end-user experience in both performance and compatibility, unifies the two developer communities, and accelerates feature development to tackle future use cases.

Confidential Containers — an outgrowth of the container isolation feature in Kata Containers and currently a CNCF Sandbox project — enables confidential computing by leveraging Trusted Execution Environments (TEEs) to protect containers and data. The Confidential Containers Kubernetes operator integrates existing TEE infrastructure support, which, along with other key security features, allows cloud-native application owners to meet stronger security requirements by protecting data in use: computation is performed inside a hardware-based TEE. This collaborative approach has enabled both the project and community to benefit from a diverse range of expertise and perspectives.
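In practice, the operator surfaces TEE-backed Kata runtimes to Kubernetes as RuntimeClasses that pods can opt into. Here is a minimal sketch; the class and handler names are illustrative, since the real names depend on the operator release and the TEE hardware available on the node:

```yaml
# Illustrative RuntimeClass: maps a name that pods can request to the
# containerd runtime handler that launches the Kata shim for a TEE-backed VM.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-qemu-sev      # hypothetical name; in reality set by the CoCo operator
handler: kata-qemu-sev     # containerd runtime handler for the Kata shim
---
# A pod opts in simply by naming the RuntimeClass; everything else in the
# spec is ordinary Kubernetes.
apiVersion: v1
kind: Pod
metadata:
  name: confidential-app
spec:
  runtimeClassName: kata-qemu-sev
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image reference
```

With a spec like this, the kubelet hands the pod to containerd's Kata handler instead of the default runtime, so the workload starts inside an isolated VM rather than a shared-kernel container.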

Since its inception, Kata Containers has continued to evolve with contributions from various organizations and individual developers. It has gained features, performance enhancements, and broader platform support, making it a compelling option for securely running container workloads. Most recently, Microsoft announced its use of Kata Containers for pod sandboxing and zero trust environments on Azure Kubernetes Service (AKS).

Kata Containers has positioned itself as a solution that combines the benefits of both virtual machines and containers, offering strong isolation while maintaining the efficiency and speed of container-based deployments. Its history showcases the collaborative nature of open-source development and its ability to address emerging challenges in the technology landscape.

During the keynotes for the last OpenInfra Summit, Ant Financial Group engineers and founding Kata Containers contributors Xu Wang and Tao Peng co-presented an eye-opening session explaining Confidential Containers with a live Kata Containers demo.

Wang and Peng used the keynote session to present a high-level overview of the logic behind CoCo, demonstrating its capabilities with a live demo featuring a language model based on Meta’s LLaMA open-source model. The demo showcased Kata Confidential Containers’ ability to enable users to securely store sensitive data and models on third-party hosted hardware.

Wang initiated the session by introducing the latest upgrade of Kata Containers, version 3.0. This upgrade streamlines the components of Kata Containers by folding the hypervisor into the newly written Rust runtime. As a result, there is only one process per pod on the host, whereas in Kata Containers 2.0 there were at least two. This modernization reduces both the deployment complexity and the memory footprint of Kata Containers.

Following that, Wang proceeded to provide an overview of the live demo architecture, featuring an integration of Kubernetes, Containerd, and Kata Confidential Containers. A key broker service was utilized as a standalone entity to assist in attesting the confidential environment. The container image was encrypted and stored in a standard Docker container registry.

The demo was run on an Alibaba Cloud server with AMD SEV capability to encrypt the guest memory so that the workloads running inside Kata Containers cannot be read from the host.

Subsequently, Peng initiated the live demo by launching a new pod with kubectl. The creation of the pod took around 30 seconds, with a significant part of the time spent pulling the container image due to the large size of the LLM image. Following this, it was demonstrated that the pod could not be accessed using kubectl exec, since the exec API is blocked by kata-agent. Despite this, it was possible to log in to the pod using a pre-configured SSH key and proceed with model serving.
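A pod along the lines of the demo's might be sketched roughly as follows. The runtime class name, image reference, and port are assumptions for illustration only, not the demo's actual manifest:

```yaml
# Hypothetical pod manifest in the spirit of the demo: the container image
# is encrypted and sits in a standard registry; the guest-side kata-agent
# obtains the decryption key from the key broker service only after remote
# attestation of the SEV-protected environment succeeds.
apiVersion: v1
kind: Pod
metadata:
  name: llama-demo
spec:
  runtimeClassName: kata                 # illustrative; selects the Kata CoCo runtime
  containers:
    - name: llm
      image: registry.example.com/llama-encrypted:latest  # encrypted image (placeholder)
      ports:
        - containerPort: 22              # SSH with a pre-configured key is the
                                         # only way in; kubectl exec is blocked
                                         # by kata-agent's API policy
```

The notable point is that decryption and image unpacking happen inside the attested guest, so neither the image contents nor the model weights are ever visible in plaintext to the host.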

During the live demo, a chatbot-like application was used that accepted prompts and produced responses on the command line. Wang asked a few questions, such as “explain the theory of relativity” and “write a bubble sort program in Go”. Additionally, he prompted the AI model to compare two Kata presentations at the OpenInfra Summit based on their session titles. The AI model responded quickly to these questions, which delighted the audience and resulted in several rounds of applause.

The live demo was executed flawlessly, leaving the audience thoroughly impressed with the AI’s capability to comprehend and answer such complex questions. Additionally, the demo effectively demonstrated how Kata Containers is empowering AI users to secure their data and models using Confidential Containers technology.

Kata Containers lives under the Apache 2.0 license, hosted on GitHub — and, in the spirit of the Four Opens, anyone can contribute code, docs, reviews, or anything else they would like to offer to enhance the project.

If you’re interested in staying updated on the latest Kata Containers news, have feature suggestions, or are seeking top talent, the Kata Containers Slack channel is an excellent platform for engaging with the community. For technical assistance, the #kata-dev channel is your go-to resource, while more casual discussions happen in #general.

If you’re more of an IRC person, find us on OFTC at #kata-dev. This channel hooks into Slack so you’ll be able to reach all of the same folks, just on a different platform. If you’re more of a face-to-face communicator, join the weekly Architecture Committee meetings, which usually happen every Tuesday at 1500 UTC, and serve as a hub for sharing project updates, feature requests, demos, and more. You can find upcoming agendas and recordings of past meetings on our official Kata Containers Etherpad.

We also maintain a playlist of use cases, presentations by esteemed members of the community, and more on the official OpenInfra YouTube channel. There you will find the entire OpenInfra Summit Vancouver keynote as well as other riveting demos, interviews, use cases and more from the OpenInfra community.

Join us at KubeCon + CloudNativeCon North America this November 6 – 9 in Chicago for more on Kubernetes and the cloud-native ecosystem.