3 Tips From A CTO To Manage The Complexity Of Microservices

Allied Market Research forecasts that the global microservices architecture market will grow to over $8 billion by 2026, at a CAGR of 18.6% from 2019 to 2026. This growth is fueled by many factors, including continuous transformation demands, the proliferation of connected devices and the adoption of cloud-based products.

With cloud-native adoption only expected to grow, decomposing monolithic architectures is now a requirement for businesses transforming into technology companies. Delivering software quickly, reliably and efficiently requires agility, a DevOps culture, CI/CD best practices and the adoption of microservices architectures to reap the benefits of loosely coupled services. However, as the number of microservices in a development organization increases, so does the complexity. For engineering leaders, tracking those microservices and knowing which ones belong to which product can become complicated.

For example, what if an organization has hundreds or even thousands of undocumented (or manually documented) microservices that run through the pipeline or are deployed in Kubernetes? Can the organization even document the respective teams (i.e., owners), along with dependencies and versions?

If you answered “no” to the questions above, or are simply at the early stages of your microservices journey, I will share insights that will ease your own microservices journey. These insights center on three key themes for any technology leader to consider when driving the adoption and scale of microservices: create shared visibility, establish a baseline for allocating resources efficiently, and reduce risk arising from deployments in your development organization.

Create shared visibility into a microservice landscape

Most organizations already hold a lot of information and disconnected data points, but as an engineering leader, you need to find a way to bring them together. When decoupling the monolith, one of the most important considerations is how to give the whole organization a baseline understanding of which microservices exist, who owns them, which team(s) they belong to, whom to contact, etc. As an organization grows and the number of microservices increases, engineering teams must help the organization visualize its microservice landscape with details on ownership, dependencies, open-source libraries and business context.

Manual documentation does not scale; it is a tedious effort whose overhead can easily be automated given the right tooling. Microservice discovery and documentation can be automated to power configurable reports and dashboards and to drive autonomy and decision-making. These capabilities are essential to long-term engineering success, and the resulting insights can also help measure development KPIs. Ideally, you can not only document but also aggregate data and surface relevant development team KPIs. Plus, never missing a deployment again and surfacing orphaned microservices can create significant competitive advantages for the business.
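
To make this concrete, here is a minimal sketch of what automated discovery could look like for workloads running in Kubernetes. It assumes your deployments carry the standard app.kubernetes.io/name and app.kubernetes.io/version labels plus a team label of your own choosing (that label key is an assumption, not something your cluster necessarily uses), and it relies on the official Kubernetes Python client.

```python
# Minimal discovery sketch: list Deployments cluster-wide and build catalog entries
# from their labels. Assumes workloads carry the standard app.kubernetes.io/* labels
# and a hypothetical "team" label; adjust the label keys to your own conventions.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside the cluster
apps = client.AppsV1Api()

catalog = []
for dep in apps.list_deployment_for_all_namespaces().items:
    labels = dep.metadata.labels or {}
    catalog.append({
        "service": labels.get("app.kubernetes.io/name", dep.metadata.name),
        "version": labels.get("app.kubernetes.io/version", "unknown"),
        "team": labels.get("team", "unowned"),  # hypothetical ownership label
        "namespace": dep.metadata.namespace,
        "images": [c.image for c in dep.spec.template.spec.containers],
    })

# Surface orphaned microservices: anything nobody has claimed ownership of.
for entry in catalog:
    if entry["team"] == "unowned":
        print(f"Orphaned: {entry['namespace']}/{entry['service']} ({entry['version']})")
```

Run on a schedule and written to a shared store, even a small script like this replaces a stale wiki page with documentation that is always current.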

Allocate resources with a microservices catalog

Based on the factors below, engineering leaders can make informed decisions about when to allocate resources to reliability improvements and when to invest in new features and products. As a CTO, I always want to monitor the as-is state, but I must also consider where to change, invest and allocate development resources differently over time.

As a company adopts a microservices architecture, a key enabler for scalability is to create autonomy and give colleagues self-service access to relevant information. If a team member can immediately see who owns a web service or an interface and can contact that person directly, wasted time can be avoided. This is particularly important for shortening the ramp-up and onboarding time for new developers. Onboarding comes back to the importance of cataloging: it provides an initial understanding of the architecture and the ability to navigate the company without the tacit knowledge that only long-tenured developers would have.

If an organization is adept at identifying unreliable services and their implications, teams can better understand the availability of their product’s services and shorten incident response times. The focus must be on providing a holistic overview of these fast-changing environments by syncing deployment and dependency information from CI systems and runtime information from Kubernetes. This information should be stored in a central repository and contextualized to user-defined requirements via reports and matrices.
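
What that central repository stores can be as simple as one record per service that merges deployment data from CI with runtime data from Kubernetes. The sketch below is purely illustrative; the field names are assumptions rather than a prescribed schema.

```python
# Illustrative catalog record combining deployment data (from CI) with runtime
# data (from Kubernetes). The field names are assumptions, not a fixed schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class ServiceRecord:
    name: str
    team: str                                         # owning team, for questions and incidents
    version: str                                      # version currently deployed
    dependencies: list = field(default_factory=list)  # downstream services it calls
    libraries: dict = field(default_factory=dict)     # open-source library -> version
    last_deployed: Optional[datetime] = None          # reported by the CI system
    runtime_replicas: int = 0                         # reported by Kubernetes

def services_owned_by(records, team):
    """Example catalog query: every service a given team owns."""
    return [r for r in records if r.team == team]
```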

Another benefit of a central microservice catalog is that CTOs and engineering leadership can easily monitor KPIs such as deployment frequency, mean time to resolution (MTTR) and failure rates across all products. If the catalog presents a clear line of sight into the ownership and performance of self-developed software, business stakeholders can share and discuss SLAs and SLOs and track how they develop over time. A relevant consideration here is to make API integration as simple as possible so that SLIs can be ingested automatically (e.g., from systems like Pingdom).
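
Once deployment and incident timestamps live in the catalog, the KPI roll-ups themselves are straightforward. The functions below are a rough sketch that follows the commonly used DORA-style definitions; they assume nothing about any particular monitoring vendor.

```python
# Sketch of KPI roll-ups over catalog data. Assumes the catalog already records
# deployment timestamps and incident open/resolve times.
from datetime import datetime, timedelta

def deployment_frequency(deploy_times, window_days=30):
    """Average deployments per day over the trailing window."""
    cutoff = datetime.utcnow() - timedelta(days=window_days)
    recent = [t for t in deploy_times if t >= cutoff]
    return len(recent) / window_days

def mttr_hours(incidents):
    """Mean time to resolution in hours, given (opened, resolved) datetime pairs."""
    if not incidents:
        return 0.0
    total = sum((resolved - opened).total_seconds() for opened, resolved in incidents)
    return total / len(incidents) / 3600

def change_failure_rate(deploy_count, failed_count):
    """Share of deployments that led to a failure in production."""
    return failed_count / deploy_count if deploy_count else 0.0
```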

Reduce risk wherever possible

Engineering leaders must build trust with their development teams. As described previously, a great way to do this is by providing access to automated microservice documentation and by facilitating knowledge exchange and self-education without creating overhead for your developers.

With an “inventorization” of microservices, the entire organization can map libraries and versions to its microservices and understand the full picture of the microservice landscape. If you can extract all of the open-source libraries and map them to their corresponding open-source licenses, your organization can quickly identify whom to go to with an open-source license management inquiry.
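
For a Python-based service, one lightweight way to start is to read the license metadata that installed packages already declare. Dedicated SBOM and license-scanning tools go much further; treat this only as a sketch of the library-to-license mapping idea.

```python
# Sketch: enumerate the libraries installed in a Python service and read their
# declared license metadata. Real setups usually rely on SBOM tooling; this only
# illustrates the idea of mapping libraries to licenses.
from importlib import metadata

def installed_licenses():
    licenses = {}
    for dist in metadata.distributions():
        name = dist.metadata.get("Name", "unknown")
        licenses[name] = dist.metadata.get("License") or "UNKNOWN"
    return licenses

if __name__ == "__main__":
    for lib, lic in sorted(installed_licenses().items()):
        print(f"{lib}: {lic}")
```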

Organizations need to prepare for governance by enabling real-time insights into what is happening in development environments and directly spotting violations. This helps anyone in the IT organization quickly find out whether a license is approved or not. If it’s a new license, steps can be taken, such as reaching out to legal for approval. And if a license has been rejected, the individual knows that the library they’re trying to use is not a good choice. Copyleft is a critical topic, as it leads to important compliance issues; for many software development organizations, copyleft licenses aren’t usable. Therefore, from an auditing and due diligence perspective, having all licenses mapped to workflows and being able to show that everything is in place is beneficial for legal teams.
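
A policy check on top of that mapping can be as simple as an allow/deny list. The lists below are placeholders: which licenses are approved, rejected or routed to legal is a decision for your legal team, not for this sketch.

```python
# Toy policy check against an allow/deny list. The license sets here are
# placeholders; the real decisions belong to your legal team.
APPROVED = {"MIT", "Apache-2.0", "BSD-3-Clause"}
REJECTED = {"GPL-3.0-only", "AGPL-3.0-only"}  # example copyleft licenses

def check_license(license_id):
    if license_id in APPROVED:
        return "approved"
    if license_id in REJECTED:
        return "rejected"
    return "needs legal review"  # new or unknown license: route to legal for approval

print(check_license("MIT"))            # approved
print(check_license("AGPL-3.0-only"))  # rejected
print(check_license("EUPL-1.2"))       # needs legal review
```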

Another important consideration around open source is vulnerability analysis and understanding which of the libraries you’re using are affected. In every organization, potentially thousands of libraries could be impacted by vulnerabilities. There are effective tools available to detect that information; the hard part is knowing whom to route it to internally. Going forward, the ability to group these findings and send them to the right teams will be another key piece of governance in the development organization.
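
Routing can build on the same catalog: if you know which team owns each service and which libraries each service uses, grouping scanner findings by owning team is a small step. The data shapes below are assumptions meant only to illustrate the grouping.

```python
# Sketch: group vulnerability findings by the team that owns the affected service,
# using the catalog's library -> service -> team mapping. The data shapes are
# assumptions; plug in whatever your scanner and catalog actually produce.
from collections import defaultdict

# Hypothetical scanner output: (library, advisory ID)
findings = [("log4j-core", "CVE-2021-44228"), ("requests", "CVE-2023-32681")]

# Hypothetical catalog mapping: library -> list of (service, owning team)
usage = {
    "log4j-core": [("payments-api", "team-payments"), ("billing-worker", "team-billing")],
    "requests": [("catalog-sync", "team-platform")],
}

by_team = defaultdict(list)
for library, advisory in findings:
    for service, team in usage.get(library, []):
        by_team[team].append(f"{advisory} via {library} in {service}")

for team, items in by_team.items():
    print(f"{team}: {items}")  # in practice: open a ticket or notify the team's channel
```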

Conclusion

Engineering teams must always know what’s running where, who deployed it and how it supports the business. Building a central microservice catalog will increase developer productivity and help ensure software reliability. It will also create the transparency needed to act quickly, support decision-makers in allocating resources efficiently and ultimately reduce the headaches most commonly associated with scaling microservices.

Join the cloud native community at KubeCon + CloudNativeCon Europe 2021 – Virtual from May 4-7 to further the education and advancement of cloud native computing.