
Container Carcinization, or: How Developer Enablement Drives Effective Platform Evolution


Author: Nick Anderegg, Developer Advocate
Bio: Nick is a Developer Advocate passionate about bridging the gap between developer tooling, industry best practices, and the many ways people actually do their work.

Nature abhors a vacuum of crabs. Nature has such a tendency to create crabs that we have a word for a non-crab crustacean species evolving to have the form of a crab: carcinization. Not every crustacean eventually evolves the traits of a crab, of course. Plenty of crustaceans have entirely different specialized features; some have even cycled between carcinization and decarcinization multiple times over their evolutionary history.

In general, organisms better adapted to their environment are more likely to survive and thrive, so it’s no surprise when similar organisms in similar environments accumulate similar traits over time.

Now, you may be wondering why an article about flexible development workflows is rambling about decapods, and that’s a fair question. Really, carcinization is just a useful framing device for thinking about the evolution of your containerized application infrastructure.

Let’s compare the characteristics of crustaceans and containers that we might commonly find after each has gone through many rounds of evolution and optimization:

Carcinized Crustaceans vs. “Carcinized” Container Architectures

Body
●      Crustaceans: Segments of the main body fuse together to form a structure with two important characteristics: flat and wide.
●      Containers: Optimized to meet two goals simultaneously: flat, with as few layers as possible, and wide, able to be deployed as the runtime for any application.

Shell
●      Crustaceans: The shell is usually a fundamental part of the body, unlike some other crustaceans which have evolved to change shells as their needs change.
●      Containers: The shell is usually a fundamental part of the container, acting as the sole entrypoint for the full set of OS services provided to the application.

Notable Traits
●      Crustaceans: Adopt traits that make them stand out and improve how suited they are to the environment (e.g., porcelain crabs have large claws for defending territory).
●      Containers: Adopt traits that make them stand out but don’t necessarily improve how suited they are to the environment (e.g., using dynamically allocated persistent volumes to store ephemeral data for stateless services).

Identification
●      Crustaceans: Shouldn’t be confused with crabs.
●      Containers: Shouldn’t be confused with crabs.
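The storage anti-pattern above is easy to picture in practice. As a minimal sketch (the pod and image names are hypothetical), a stateless service that only needs scratch space can mount an emptyDir volume, which lives and dies with the pod, rather than dynamically provisioning a PersistentVolumeClaim it will never meaningfully reuse:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: stateless-worker            # hypothetical name
spec:
  containers:
    - name: worker
      image: example.com/worker:1.0 # hypothetical image
      volumeMounts:
        - name: scratch
          mountPath: /tmp/scratch
  volumes:
    # Ephemeral data belongs in an emptyDir: it is created when the pod
    # starts and deleted when the pod is removed, with no PVC to manage.
    - name: scratch
      emptyDir:
        sizeLimit: 1Gi
```

The point isn’t that persistent volumes are bad, just that a trait should be adopted because it suits the workload, not because it makes the container look more capable.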

Make no mistake, the way we gradually evolve development processes and application environments over time is different from natural selection in a fundamental way: when it comes to evolving your team’s workflow, you can make top-down choices to produce a development process adapted to your needs. The most common patterns aren’t necessarily the ones that will best serve your specific needs.

So how do you evolve your software architecture in a way that meets your changing needs without accumulating unnecessary complexity in the process?

Delivery-Aligned Software Architecture

In the world of cloud-native applications, distinguishing between application and infrastructure requirements can be difficult. When everything is an abstraction, where does one end and the next begin? By adopting a workflow-first approach to evolving your containerized software architecture, your organization can better align its development lifecycle with the separation of concerns that naturally exists between your application and its infrastructure.

Call it platform engineering or DevOps or plain ol’ business process management—whatever flavor you prefer, when developer enablement is the primary force shaping your decision-making process, your application and its infrastructure can remain closely aligned with evolving organizational needs.

When Constraints Become an Advantage

Tools for managing containerized application infrastructure, like Kubernetes, excel at allocating a large pool of resources among any number of logical groups. This represents a fundamental paradigm shift in how computational resources are managed and allocated (and illustrates one way new constraints can increase flexibility).

Although the goal of using such tools is typically to build a mature, adaptable platform that provides scaffolding for building consistent, reliable development processes, actually evolving a platform to a high level of maturity requires effort and thoughtful planning. How do you ensure that you’re making the right decisions in every iteration of your platform? How do you identify the decisions that will lead to the best outcome?

Flip the architectural formula on its head and take a workflow-first approach to evolving your platform to maturity. Ask yourself “how will this change affect the operation of the platform?”

To put it another way, instead of designing a platform around an idealized model of your software architecture, allow your software architecture to emerge as the natural result of designing a platform around an (idealized) model of how your team delivers software: a delivery-aligned software architecture.

After all, the best software architecture in the world is useless if it’s impossible to deliver the software to your users.

When Flexibility Becomes a Liability

Although every project will have interdependencies between an application and its underlying infrastructure, the requirements that shape these layers are rarely one and the same; the evolutionary forces pushing each layer forward can differ significantly. Consider, for a moment, the obstacles an engineering org might face when scaling an ecommerce application with rapidly growing traffic, as compared to challenges encountered by a team adding support for screen recording in a niche SaaS app.

In the first scenario, the requirement that the application be able to scale out horizontally places constraints on the application architecture itself. However, the architecture of the infrastructure supporting the application can remain stable throughout its lifespan, with increases in traffic accommodated through parallel deployments onto that infrastructure.
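That division shows up directly in Kubernetes primitives. In a hedged sketch (the names and thresholds below are illustrative, not prescriptive), the infrastructure definition stays fixed while traffic growth is absorbed by running more parallel copies of the same Deployment, for example via a HorizontalPodAutoscaler:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: storefront          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: storefront        # the stateless application being scaled out
  minReplicas: 3
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          # Add parallel replicas whenever average CPU utilization
          # across the existing pods exceeds 70%.
          averageUtilization: 70
```

Notice that nothing about the underlying cluster changes here: scale is expressed as a property of the workload, which is exactly why the infrastructure layer can evolve on its own timeline.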

In the second scenario, the application may only require a small change (at least, from a product perspective) to support the new screen recording user experience. Because of the significant resource requirements inherent to hosting and distributing video content, significant changes may need to be made to the application’s backend and infrastructure, even if the new feature is only expected to see limited usage.

There are a number of ways to construct an adequate architecture for either of these situations, but not all of them will lead to an ideal outcome.

Adapting to New Environments

In general, tools better adapted to their operational environments are more likely to survive and thrive, so it’s no surprise when dissimilar organizations with dissimilar needs accumulate similar tools over time.

Consider how Kubernetes is being used to manage the computing infrastructure of restaurant chains or automotive head units. The physical infrastructure is the same as it always was (inexpensive, consumer-grade computers in restaurants, low-power embedded computers in vehicles, etc.) and the applications remain exactly what you’d expect to find in either situation.

Although these are wildly different products with few common challenges between them, both are examples of mature internal platforms whose architecture is informed by the operational needs of the company running it.

Adapting to External Constraints

When it comes to adapting to external constraints, the use of Kubernetes in restaurant operations is an excellent example of how a delivery-aligned software architecture emerges from a workflow-first approach to evolving your development processes.

In this case, a common orchestration layer separately manages the distinct needs of edge computing in restaurants and centralized applications in the cloud. Rather than treating the platform as a single distributed system with resources spread across every restaurant, however, Kubernetes orchestrates deployments to a distinct cluster located in each store.

Of course, when software delivery processes are efficient, reliable, and automated, deploying updates to thousands of separate clusters isn’t much different from deploying updates to one environment.
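One way to automate that kind of fleet-wide rollout (a sketch assuming a GitOps tool such as Argo CD; the names and repository URL are hypothetical) is a single template that gets stamped out once per registered cluster, so a change merged to one repository propagates to every store:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: store-apps              # hypothetical name
spec:
  generators:
    # Emits one Application per cluster registered with Argo CD --
    # e.g., one cluster per restaurant.
    - clusters: {}
  template:
    metadata:
      name: 'store-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://example.com/platform/manifests.git  # hypothetical repo
        targetRevision: main
        path: store
      destination:
        server: '{{server}}'    # filled in per cluster by the generator
        namespace: store
```

Whether you reach for this tool or another, the workflow-first principle is the same: the delivery mechanism, not the cluster count, is what you design around.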


By starting with first principles and focusing on your team’s internal developer experience, you can build a delivery-aligned software architecture and avoid deploying carcinized containers: that is, containers that are nothing more than large, flat shells, indistinguishable from one another, with big features that aren’t suited to every environment.

Join us at KubeCon + CloudNativeCon North America this November 6 – 9 in Chicago for more on Kubernetes and the cloud-native ecosystem.