On the eve of KubeCon 2021, it’s a great opportunity to shine a light on some of the trends and philosophical principles that have guided our community’s progress in recent months. The advent of microservices has afforded us tremendous agility in how we build and deploy cloud native software, and the open-source community’s focus on continuous improvement and innovation is manifesting in some really interesting ways.
Argo CD has emerged as the fastest-growing open-source tool today, and the largest and most popular GitOps tool available – and this is a direct reflection of the rising importance of GitOps disciplines. Where previously our workflows were plagued with guesswork and frustration, GitOps exists to foster a faster, safer, and more scalable approach to continuous delivery (CD).
GitOps’ rising popularity speaks to the massive horizontal scaling challenges that enterprises continue to grapple with when deploying and managing thousands of microservices on thousands of clusters. When scaling to hundreds of thousands of workloads, how do you keep track of them, how do you guarantee uptime, and how do you create and enforce policy addressing how the software is released, maintained and updated?
AUTOMATIC EFFICIENCY AND AGILITY
With speed and scale always top of mind, seasoned DevOps teams can confidently deploy code to production hundreds of times daily. This agility can be their competitive advantage when it isn’t impeded by workflow frictions and complexity. GitOps answers this challenge, drawing on decades of DevOps knowledge and best practices. And while the ideas and disciplines that underpin GitOps aren’t altogether new, they’ve been refined and standardized in a manner that makes them much more accessible and intuitive.
Your engineering team’s innate desire to deploy more often must be counterbalanced with your operations team’s need for uptime assurance. GitOps reconciles the two through improved build, test, and deploy processes spanning continuous integration (CI) and CD that reveal where regressions are coming from and trigger efficient, instantaneous rollbacks. This in turn helps eliminate downtime and enables seamless disaster recovery and protection in the event of hacks, ransomware schemes, and other security threats.
With GitOps best practices, the entire code delivery process is controlled via Git, using an ‘infrastructure as code’ model and a defined workflow in which code changes are managed through an automated process. Deployments, tests, and rollbacks are handled through the Git flow with a closed reconciliation loop that eliminates guesswork and lets software delivery teams focus their energy on more pressing and productive matters. Operations are performed by pull request: a code change is made, checked into Git under version control, and the system essentially handles the rest. There’s no more risk of a bad manual configuration being applied to a live environment, or the downtime that can follow.
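To make the pull-request flow concrete, here is a minimal sketch of the declarative piece, assuming Argo CD as the reconciliation engine; the application name, repository URL, and paths below are placeholders, not a prescribed setup:

```yaml
# Hypothetical Argo CD Application: the desired state lives in Git,
# and the controller continuously reconciles the cluster toward it.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service              # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-config.git   # placeholder repo
    targetRevision: main
    path: environments/production/my-service
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual drift back to what Git declares
```

With a definition like this, a rollback is simply a git revert of the offending commit: the reconciliation loop sees the change in Git and returns the cluster to the previous known-good state.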
GitOps eliminates the need for cumbersome hand-rolled deployments and the confusion and complexity inherent to them. Anyone can freestyle a scheme for deploying software, but it’s considerably more advantageous to adopt an opinionated GitOps pattern that tells you exactly how your repo should be structured, how to handle testing, how to promote changes across environments, and more. That’s invaluable because, among other things, it dramatically reduces the time your team would otherwise spend on research and design.
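One common pattern (a hypothetical sketch, not the only valid layout) is a single configuration repository with a directory per environment, where promoting a change is just a reviewed pull request from one directory to the next:

```
# Hypothetical environment-per-directory layout
gitops-config/
├── apps/
│   └── my-service/            # base manifests shared by all environments
├── environments/
│   ├── staging/
│   │   └── my-service/        # staging-specific overrides
│   └── production/
│       └── my-service/        # production-specific overrides
└── README.md
# A change lands in staging/ first; promotion to production/ is a
# pull request that copies the tested change across.
```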
Key to this approach is the recognition that running infrastructure isn’t practical under an imperative operations model, in which a list of tasks must be completed in a rigidly defined sequence for everything to function smoothly. In that model, a scripted progression of events must occur in exact order: if anything goes wrong at any step, the rest of the process collapses, because each step depends on the one before it.
GitOps eliminates this bottleneck and potential point of failure through a declarative approach that treats pipelines as policies to be implemented, not sequences of commands. The declarative approach defines how the infrastructure should look, and the system automatically converges on that state, keeping it adaptable and scalable as changes arise.
The declarative approach has served us well with Kubernetes, whereby we simply tell it that we want X, Y, and Z results, and it figures out how to deliver them – automatically and independently. It affords us a turnkey way to say, “Here’s my service and here’s how I want it built and deployed.” Kubernetes will do the rest, and it will never stop trying until the policy is satisfied.
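A minimal sketch of what that looks like in practice (the service name and container image below are placeholders): we declare the outcome we want, and Kubernetes works continuously to make reality match it.

```yaml
# Hypothetical Deployment: "run three replicas of this service."
# Kubernetes keeps converging on that declared state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: example/my-service:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
```

If a pod crashes or a node disappears, the controllers notice the gap between declared and actual state and recreate what’s missing; no scripted sequence of recovery steps is required.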
COMMITMENT TO THE COMMUNITY
Codefresh is working collaboratively with AWS, Azure, Weaveworks, GitHub, and many others at the CNCF, building on the Kubernetes project to establish an open GitOps technology standard. Meanwhile the open-source community – Codefresh included – is contributing to and adopting Argo CD as it continues to soar in popularity and value.
As one of the Argo project’s lead maintainers, Codefresh is dedicated to advancing the project and unlocking its full potential for efficient, fully featured continuous delivery. Our major investments of time and engineering resources in Argo and GitOps go hand in hand, and we’ll be making major news at KubeCon as we proudly announce a Codefresh company transformation squarely aligned with supporting these important and exciting initiatives.
As Chief Open Source Officer at Codefresh – a role that I’m humbled to hold and strongly embrace – it will be an honor to extend and cement our steadfast commitment to open source with a major ongoing contribution. In so doing, we’ll take the very best of what we’ve learned in the five years since our inception, and freely and openly share it with the community where it can continue to flourish.