Cybersecurity is, at its essence, about safeguarding systems. Its purpose is to keep systems online, whole, and their secrets undisclosed. When done well, it ensures that a service remains in place for as long as intended, is accessed only by those authorized to do so, and does not deviate from its intent and purpose unless its maintainers and caretakers explicitly make those changes.

With the advent of cloud came systems designed from the ground up to be cloud native: web-scale, consumer-facing systems that were both global and highly available. And as the architecture of systems evolved, software development evolved along with it to support those characteristics. Now traffic and data flow in all directions, the perimeter has shifted, and app teams often find it easier to “route around” central IT services and go directly to the cloud. As a consequence, the overall security posture is harder to determine than ever before.

Everything else about systems is now heavily automated, including infrastructure management and entire software lifecycles. Applied security, by contrast, long assumed static behaviors and thus static configurations; it must now support both dynamism and elasticity. Not only are security requirements higher, but the stakes are too, with an increased breadth of impact. As such, cybersecurity must move from protecting fragile systems to building robust, intrinsically secure systems that are improved by constant stress and change.

Developing and deploying secure software is a journey

Cloud native security is still in its infancy, so few individuals understand it well. Security is already a complex domain; coupled with all things cloud, the learning curve is long and steep. Those who are knowledgeable about cloud native security are consequently in high demand and stretched thin. At their core, general security practices are based on well-understood principles; it is the lack of coordinated implementation and integration where the approach tends to fall short. The same traditional, atomistic approach has proven both inefficient and inadequate in modern development environments.

While the existing building blocks are distinct from one another and confer different benefits, when it comes to cloud native security, implementing traditional security controls in isolation from policy, procedures, or techniques guarantees neither safety nor significantly better metrics for measuring cybersecurity risk.

Shifting left and applying zero trust

The emergence of the “expand left” and zero trust models addresses the need for security to be applied at a much more granular level, across the board. “Shift left” denotes moving security earlier in the lifecycle; “expand left” extends security to cover the entire lifecycle. But security must permeate all stages of the software development lifecycle, and realizing this in every model is a constant effort. It can be hard to appreciate the degree of innovation and breakthrough that lies behind open standards, which include:

  • Software bills of materials (SBOMs)
  • The Update Framework (TUF) and Notary, for binary signing
  • Supply chain logs, through in-toto
  • Policy frameworks such as Open Policy Agent (OPA)
  • Service identity with specifications like the Secure Production Identity Framework For Everyone (SPIFFE) and its software implementation and attestation engine, SPIRE
  • Runtime security with Falco
  • Transparency ledgers like Sigstore’s Rekor

This is not an exhaustive list; there is a lot of other open source software that makes realizing those models possible but that is incomplete when implemented as a standalone solution.
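As a concrete illustration of how one of these blocks is consumed, the sketch below checks an artifact's digest against a CycloneDX-style SBOM component record. The component data and digests are hypothetical, a minimal sketch only; a real pipeline would verify a signed SBOM produced by the build system.

```python
import hashlib

# A minimal, CycloneDX-style component record (hypothetical data for illustration).
sbom_component = {
    "name": "example-lib",
    "version": "1.2.3",
    "hashes": [{"alg": "SHA-256",
                "content": hashlib.sha256(b"artifact bytes").hexdigest()}],
}

def verify_artifact(artifact: bytes, component: dict) -> bool:
    """Return True if the artifact's SHA-256 digest matches the SBOM record."""
    digest = hashlib.sha256(artifact).hexdigest()
    return any(h["alg"] == "SHA-256" and h["content"] == digest
               for h in component["hashes"])
```

A tampered artifact produces a different digest and fails the check, which is exactly the property the standalone tools above compose to guarantee at scale.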

Designing and integrating disparate systems as one

Now, to improve on the security of the past, you can progressively architect for state-of-the-art advances in security. When independent solutions and methods are assembled to work in conjunction with one another, the result is a robust and secure system. Tight integration of security methods and mechanisms determines integrity, resiliency, and adaptability over time. The efficacy of cloud native security mechanisms is proportional to how tightly integrated those mechanisms and controls are, and how they are verified. For example: Is admission control enforced on the provenance of a binary? Are the identities of the machines that build software strongly attested? Is the signing and verification framework integrated with a modern cloud native public key infrastructure? Also important is how well these mechanisms interoperate with existing systems and tooling.
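The admission-control question above can be sketched as a policy check, assuming hypothetical attestation data: admit an image only if a verified provenance attestation from a trusted builder exists. A real system would verify signed in-toto/SLSA attestations rather than consult an in-memory dict.

```python
# Hypothetical provenance attestations keyed by image digest; in production
# these would come from cryptographically verified attestations.
ATTESTATIONS = {
    "sha256:abc123": {"builder": "https://ci.example.com/trusted-builder",
                      "signature_verified": True},
}
TRUSTED_BUILDERS = {"https://ci.example.com/trusted-builder"}

def admit(image_digest: str) -> bool:
    """Admission check: require a verified attestation from a trusted builder."""
    att = ATTESTATIONS.get(image_digest)
    return (att is not None
            and att["signature_verified"]
            and att["builder"] in TRUSTED_BUILDERS)
```

Note the default: an image with no attestation at all is denied, which keeps the control fail-closed.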

Conceptually, taking a defense-in-depth approach also fits the bill. However, for defense to be truly in-depth, it must be automated all the way from the supply chain of software factories to the intrusion prevention and intrusion detection controls in production systems. Each layer builds on top of the others and treats the outputs and inputs of the layers before and after it with skepticism. But with security as an automated function and readily available service of the underlying infrastructure, there is a faster, safer path to production, as there is for deployed applications at runtime once in production.

By looking at the hierarchical and cooperating building blocks of signing, scanning, verifying, issuing identities, and enforcing access control, we can start to move towards disparate systems that are orchestrated as a distributed “security operating system.” Distributed systems make deviations stand out and encumber persistent attacks, and as such are harder to attack than non-distributed systems.

APIs as the contract stakeholders can agree upon

With APIs as a contract that serializes governance, risk, and compliance requirements, practitioners can express requirements and objectives, capture them in a shared articulation, and then program the automation that keeps assurances in place. Having APIs and the ability to describe infrastructure as code has enabled policy and security controls to be expressed as code, too.
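For instance, a compliance requirement such as "storage buckets must deny public access and be encrypted at rest" can be serialized as code and evaluated against infrastructure described as data. The resource model below is hypothetical, a minimal sketch rather than any particular cloud's API.

```python
def check_bucket(bucket: dict) -> list[str]:
    """Evaluate a bucket description against the policy; return violations."""
    violations = []
    if bucket.get("public_access", False):
        violations.append(f"{bucket['name']}: public access must be disabled")
    if not bucket.get("encrypted", False):
        violations.append(f"{bucket['name']}: encryption at rest required")
    return violations

# Infrastructure described as data (hypothetical resources).
buckets = [
    {"name": "logs", "public_access": False, "encrypted": True},
    {"name": "backups", "public_access": True, "encrypted": False},
]
report = [v for b in buckets for v in check_bucket(b)]
```

Because the policy is code, the same check can run in CI, at admission time, and in periodic audits, keeping the assurance continuously in place.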

Mitigating the utility of exfiltrated credentials

The common denominator of many recent high-profile attacks that permitted entry into systems was exfiltrated credentials. When the software industry moved away from embedding or hard-coding long-lived, pre-shared secrets, it took the first of many steps in the right direction. Ensuring that identities are derived by recognition, as opposed to proof of possession, is key. So is performing runtime attestations to “fingerprint” or “retina-scan” identity, and ensuring that the credentials subsequently issued have a limited lifetime, with durations of a few hours or shorter, and are automatically rotated when they expire.
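The short-lived-credential idea can be sketched with a minimal HMAC-signed, JWT-like token carrying an expiry claim. The key handling here is for illustration only; production systems would use a managed key service and a standard token format.

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # illustration only; real systems use a managed KMS

def issue_credential(subject: str, ttl_seconds: int = 3600) -> str:
    """Issue a short-lived, signed credential (a minimal JWT-like token)."""
    claims = {"sub": subject, "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_credential(token: str) -> bool:
    """Reject tokens with bad signatures or past their expiry."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return time.time() < claims["exp"]
```

Because every token expires on its own, an exfiltrated credential has a sharply bounded window of utility, and rotation becomes a routine re-issuance rather than an incident response.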

The importance of strong verifiable identities

Mechanisms to identify software (credentials in the form of usernames and passwords, API keys, tokens, and certificates) predate cloud. But static identities fit poorly in dynamic multi-cloud environments, where traditional security is neither effective nor ever-present.

A stumbling block practitioners encounter along the journey is an inability to define what to trust or not to trust without knowing where a component came from or how it got there in the first place. And even then, how can you be sure? As a workaround, the process of establishing secure, bi-directional communication between parts of the system is done by fiat, not by inherent trust. It is subsequently hard to decide whether an application call should be allowed based on the last caller rather than on the broader context of what initiated the call, unless you develop a truly deep understanding of the system, namely its traffic and data flows, from end to end.

Service identities and authenticated service-to-service communications make system interactions determinable. Once you have established verified trust on a bedrock of identity, you will have a wealth of risk information about services, the actions those services perform, and how those services relate to one another. Previously unidentified components, in turn, can be made the subjects of granular defense.
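With verified identities in hand, service-to-service authorization can be reduced to explicit, auditable rules. The sketch below uses SPIFFE-style identity URIs (the service names are hypothetical) and allows only explicitly permitted caller/callee pairs.

```python
# SPIFFE-style identities as URIs; allowed caller -> callee pairs are explicit.
ALLOWED_CALLS = {
    ("spiffe://example.org/frontend", "spiffe://example.org/orders"),
    ("spiffe://example.org/orders", "spiffe://example.org/payments"),
}

def authorize(caller_id: str, callee_id: str) -> bool:
    """Allow a service-to-service call only for explicitly permitted pairs."""
    return (caller_id, callee_id) in ALLOWED_CALLS
```

Anything not on the list is denied, so a compromised frontend cannot reach the payments service directly: each hop must be independently authorized.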

Collecting trusted metadata about artifacts and using metadata to make policy decisions

As you develop and surface additional information about system internals, a wealth of metadata emerges. That metadata is as important as the data itself, and must therefore be preserved, leveraged, and protected commensurately. With this additional “data about data,” your security and risk teams have concrete means to describe your security, compliance, and regulatory objectives through code, while leveraging automated reasoning technology to perform verification and audits.
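A sketch of such a metadata-driven policy decision, assuming hypothetical metadata fields: a deployment is gated on whether the artifact is signed, carries an SBOM, and is free of critical vulnerabilities.

```python
def deploy_decision(meta: dict) -> tuple[bool, list[str]]:
    """Gate a deployment on trusted metadata about the artifact."""
    reasons = []
    if not meta.get("signature_verified"):
        reasons.append("artifact signature not verified")
    if not meta.get("sbom_present"):
        reasons.append("no SBOM attached")
    if meta.get("critical_vulns", 0) > 0:
        reasons.append(f"{meta['critical_vulns']} critical vulnerabilities")
    return (not reasons, reasons)
```

Returning the reasons alongside the verdict is what makes the decision auditable: the same record that blocks a deployment also explains the block to compliance reviewers.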

Preparing for and ensuring progress along the journey

Because levels of security maturity and their respective threat models differ widely from organization to organization, there are no one-size-fits-all recommendations. For each organization, team, and person, you have to take into account the existing investment and the level of skill involved.

Cloud native security goes far beyond simply laying a solid foundation. In order to address it effectively, stakeholders must first unlearn old paradigms, restate their assumptions, and consider any problem without bias. Cloud native security is different from traditional systems security in almost the same way cloud native systems are different from traditional systems. Special focus and consideration must be paid to dynamism, API-driven software, ephemeral and immutable workloads, and platform-agnostic tooling that can span multiple clouds.

To paraphrase David Wheeler, the director of open source supply chain security at the Linux Foundation, among the many considerations you should make as you prepare to embark on your journey, start your software evaluation by looking for additional information as to who developed it and how. You will also want to begin using—as well as demanding that your suppliers use—SBOMs. You moreover want to employ verified reproducible builds, make use of cryptographic signature and integrity attestation, and increase your use of memory-safe languages. And perhaps most importantly…work with others to help make things better!

Trusted company

“Which is more important,” asked Big Panda, “the journey or the destination?”

“The company,” said Tiny Dragon. (Concept courtesy of James Norbury.)

VMware Tanzu is sponsoring Cloud Native Security Day (May 4, 2021), which is organized by the CNCF Special Interest Group for Security (CNCF SIG-Security). The program committee has curated a schedule that combines best-of-breed open source technology from the cloud native security parts bin with discussion and knowledge sharing of real-world use cases and how best to leverage solutions that address them.

Join the cloud native community at KubeCon + CloudNativeCon Europe 2021 – Virtual from May 4-7 to further the education and advancement of cloud native computing.
