
Knative at 1: New Changes, New Opportunities


Author: William Markito Oliveira

This summer marked the one-year anniversary of Knative, an open-source project that provides the fundamental building blocks for serverless workloads in Kubernetes. In its relatively short life (so far), Knative is already delivering on its promise to boost organizations’ ability to leverage serverless and FaaS (functions as a service).

Knative isn’t the only serverless offering for Kubernetes, but it has become a de facto standard because it arguably has a richer set of features and integrates more smoothly than the competition. And the Knative project continues to evolve to address businesses’ changing needs. In the last year alone, the platform has seen many improvements, giving organizations looking to expand their use of Kubernetes through serverless new choices, new considerations and new opportunities.

Knative’s Serverless Boost

The open source Knative project, which originally sprang out of Google, extends the Kubernetes container orchestration platform with components for deploying, running and managing serverless applications. More than 80 organizations have contributed to Knative since it was initially released.

In the serverless model, organizations can build and run applications without having to worry about infrastructure and servers. And, through the use of containers as the format for serverless workloads, Knative enables developers to more easily build event-driven applications that can scale up and down (even to zero) based on demand.
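To make that concrete, here is a minimal Knative Service manifest. It is a sketch that assumes a recent Knative Serving release and uses the project’s well-known helloworld-go sample image; the service name is illustrative:

kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                # illustrative name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
EOF

From this single manifest, Knative derives the route, revision and autoscaler configuration, scaling the container down to zero when no requests arrive.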

Some organizations have shied away from serverless because of concerns about cloud vendor lock-in, performance and complexity, but these are issues Knative was designed to address. Knative enables developers to build and run applications anywhere Kubernetes runs, whether on-premises or on any cloud, assuaging concerns about lock-in.

Knative Components

When it was first released, Knative consisted of three main components: Build (cloud-native source-to-container orchestration), Serving (a scale-to-zero, request-driven compute model) and Eventing (universal subscription, delivery and management of events).

Since then, as the need for a complete continuous integration and delivery (CI/CD) pipeline solution became increasingly clear, the Build module was spun out on its own. Build’s new incarnation is Tekton, a Kubernetes-native CI/CD pipeline.

Another important part of Knative is the kn command-line interface (CLI), which exposes commands for managing applications. With kn, users can, for example, specify limits for CPU or memory consumption, as well as limits for scale, such as concurrency and the number of instances per service. The CLI is currently focused on the Serving side of Knative, but work is in progress to enable use cases for the Eventing module as well.
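For example, deploying a service with resource and scale limits might look like the following sketch. Exact flag names have shifted across kn releases, so treat these as illustrative and check kn service create --help for your version:

# Deploy a service capped at 500 millicores of CPU and 256Mi of memory,
# handling at most 10 concurrent requests per instance, with up to 5 instances
kn service create hello \
  --image gcr.io/knative-samples/helloworld-go \
  --limit "cpu=500m,memory=256Mi" \
  --concurrency-limit 10 \
  --max-scale 5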

Performance Concerns

One of the biggest benefits of serverless and FaaS is the ability to spin application components up and down as needed, without having to worry about the server part of the equation. Of course, cold starts (the time it takes to create a new container instance for the application when starting from zero) carry overhead, but in the year since Knative was first released, the impact of getting from “zero to one” has improved significantly.
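For latency-sensitive services where even an improved cold start is too costly, operators can keep a minimum number of instances warm. As a sketch, assuming the kn CLI and a service named hello:

# Keep at least one instance running so requests never hit a cold start
kn service update hello --min-scale 1

This trades a small amount of always-on capacity for the elimination of the zero-to-one delay.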

There have also been several important changes on the Eventing side, including the introduction of Broker objects, which serve as a central hub for messages. A Broker enables communication not only between a single event source and a single service, but also among many event types coming from different sources, sinking those events to applications running in a given namespace. The ability to mix and match events from multiple sources is a key aspect of hybrid cloud deployments. The applications running in that namespace can apply filters indicating they are interested in one or more of the event types, increasing efficiency and improving performance.
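A sketch of how the pieces fit together: a Trigger subscribes an application to the Broker and filters on an event type. The broker name, event type and subscriber service below are all illustrative, and the eventing API version varies by Knative release:

kubectl apply -f - <<EOF
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: greeting-trigger
spec:
  broker: default
  filter:
    attributes:
      type: dev.example.greeting    # only events of this type are delivered
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
EOF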

Swing to Serverless

In addition to these and many other improvements to Knative, a number of factors are moving the needle on serverless. For one thing, many companies are well along in their journey to microservices. Savvy organizations understand that serverless and FaaS don’t replace those investments; rather, the technologies enable fuller exploitation of those resources.

In addition, when serverless first started gaining popularity a few years ago, we were mostly talking about the use of functions for small, very specific use cases. Now, any application that can (or, more to the point, should) scale up, down or to zero fits into the serverless model. We have a standard format to ship those applications—containers—but now organizations can leverage the same serverless model they’ve used for functions for any containerized workload.

As we move forward, the individuals and companies contributing to the Knative project will continue to improve not only the upstream code but also the integration of Knative into PaaS offerings and other platforms enterprises depend on, expanding into common enterprise event sources and general-purpose workloads to make Knative suitable for wider adoption among companies looking to join the serverless movement.

To learn more about containerized infrastructure and cloud native technologies, consider coming to KubeCon + CloudNativeCon NA, November 18-21 in San Diego.