Author: Laurent Gil (LinkedIn)
Bio: Co-founder and CPO at CAST AI, Laurent Gil is responsible for product and business development. He was Co-Founder and Chief Product and Business Officer at Zenedge (Cybersecurity), acquired by Oracle in 2018.

With an economic downturn on the horizon and budgets getting tighter, there is a lot of talk in the business world about ditching the public cloud and moving back to on-premises. Basecamp’s decision to leave the cloud to save money got a lot of attention recently, inspiring leaders to reassess their cloud ROI and consider the potential benefits of cloud repatriation.

While cloud computing delivers on its promise early on, cloud costs start putting considerable strain on profit margins once your company scales.

Organizations are seeing cloud costs spin out of control for many reasons. Some applications are a poor fit for the public cloud. Lifting and shifting apps to the cloud without replatforming them is also bound to increase costs in the long run.

Many companies that have embraced the cloud lack proper cloud cost management capabilities or are relatively early in their FinOps adoption journey, relying on cost reporting and manual configuration.

Should mature companies just move to on-prem?

Faced with growing cloud expenses and uncertainty over whether cloud optimization initiatives will eventually pay off, it makes sense for organizations to consider repatriation as a viable cost-cutting option.

After all, ordering server capacity in the on-prem scenario isn’t something engineers can do on their own, without involving other teams. And setting up that infrastructure may take weeks, which prevents any surprise spending.

While on-premises sounds like a dream come true to finance teams, it’s a nightmare for anyone looking to innovate faster than competitors. And how else can organizations survive with the recession around the corner?

Slower development and innovation pace is just one of many arguments against moving back to the world of data centers:

  • Massive capital investment – building your own data center demands immense resources, not to mention a hefty upfront fee, which also applies if you opt for a colocation center. Even greater is the cost of workers who constantly tinker with servers for storage or compute instead of focusing on making higher-value cloud services work.
  • Predicting demand is challenging – when on-prem, you need to plan well and forecast your future requirements. Bulking up on servers is risky when future demand is a black box.
  • Supply chain issues – Gartner analyst Lydia Leong pointed out the opposite trend: enterprises moving to the cloud because supply chain issues prevent them from expanding their on-prem footprint.

In an uncertain economic environment, why would an organization want to lock itself into inelastic hardware purchases and software licenses?

The core benefits of the cloud – scalability and flexibility – enable teams to scale applications up or down in line with rapidly changing demand, which is a much better model for organizations hoping to optimize costs in an unpredictable economy.

How to deal with the long-term cost implications of the cloud

Infrastructure spending should become a first-class metric for leaders entering the current tumultuous market conditions.

But tracking metrics via cloud cost monitoring, reporting, and allocation solutions such as CloudHealth by VMware or Apptio’s Cloudability isn’t enough. Organizations need to pay as much attention to their teams’ ability to provision just enough capacity for applications to run smoothly and cost-efficiently.

Companies end up wasting 32% of their total cloud spend, on average.

This makes cloud cost optimization a low-hanging fruit. Equipping teams with optimization solutions lets you stay where you are, but for a fraction of the cost.

3 steps to eliminate cloud overprovisioning

1. Rightsize cloud resources

Rightsizing is the process of determining the best combination of cloud resources to minimize waste while balancing risk and cost.

Cloud resources are scalable and available on demand. The challenge here is helping engineers understand their application requirements and picking resources of the correct size to provide an ideal customer experience.

Cost reporting solutions often show rightsizing recommendations to be implemented by engineers manually. To remove this manual effort and ensure that applications always run where they should – even as demand fluctuates – DevOps leaders are turning to automation solutions that represent a fast-evolving segment of the cloud optimization market.
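The core of a rightsizing recommendation can be sketched in a few lines: pick the smallest available resource size that covers observed peak usage plus a safety headroom. This is an illustrative sketch, not any vendor’s API; the function name, the 20% headroom, and the size ladder are all assumptions.

```python
# Minimal rightsizing sketch (illustrative assumptions, not a vendor API):
# recommend the smallest size that covers peak observed usage plus headroom.

def rightsize(usage_samples, headroom=0.2, available_sizes=(1, 2, 4, 8, 16)):
    """Pick the smallest size covering peak usage plus a safety margin.

    usage_samples: observed utilization in the same unit as available_sizes
                   (e.g. vCPUs used over the last week).
    """
    peak = max(usage_samples)
    target = peak * (1 + headroom)  # leave room for spikes above the peak
    for size in sorted(available_sizes):
        if size >= target:
            return size
    return max(available_sizes)  # demand exceeds the largest available size

# A workload peaking at 2.6 vCPUs fits a 4-vCPU shape with 20% headroom:
print(rightsize([1.2, 2.1, 2.6, 1.8]))  # -> 4
```

In practice the same logic runs per resource dimension (CPU, memory, disk) against percentile metrics rather than raw peaks, which is exactly the kind of repetitive decision automation handles better than manual review.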

2. Remove idle resources

Organizations often end up paying a big check for resources that don’t bring them any value. Idle virtual machine instances are a great example of this. They might be leftovers from an experiment a team forgot to shut down, or a shadow IT project someone in the organization launched without the knowledge of the IT department.

Organizations can use solutions that scan the infrastructure in search of idle resources. Ideally, this will help DevOps identify candidates for shutdown and trace them to a specific project or team, provided that the company implements a thorough tagging/allocation strategy.
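The scan described above can be sketched as a simple filter over an instance inventory: flag anything below a utilization threshold and group the candidates by the owning team from its tags. The field names and the 5% threshold are assumptions for illustration, not a specific provider’s schema.

```python
# Illustrative idle-resource scan (field names and threshold are assumptions).

IDLE_CPU_THRESHOLD = 5.0  # percent; below this, an instance is a shutdown candidate

def find_idle(instances, threshold=IDLE_CPU_THRESHOLD):
    """Return shutdown candidates grouped by owning team (taken from tags)."""
    candidates = {}
    for inst in instances:
        if inst["avg_cpu_percent"] < threshold:
            # A thorough tagging strategy makes ownership traceable;
            # anything untagged surfaces as a bucket of its own.
            team = inst.get("tags", {}).get("team", "untagged")
            candidates.setdefault(team, []).append(inst["id"])
    return candidates

inventory = [
    {"id": "vm-1", "avg_cpu_percent": 1.2, "tags": {"team": "data"}},
    {"id": "vm-2", "avg_cpu_percent": 47.0, "tags": {"team": "web"}},
    {"id": "vm-3", "avg_cpu_percent": 0.4},  # untagged shadow-IT leftover
]
print(find_idle(inventory))  # -> {'data': ['vm-1'], 'untagged': ['vm-3']}
```

The "untagged" bucket is the interesting output: it is where forgotten experiments and shadow IT tend to surface, which is why the tagging/allocation strategy matters as much as the scan itself.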

3. Autoscale resources to match demand in real time

This practice has a direct impact on cloud computing costs: it covers the procedures engineers put in place to guarantee that applications remain available at peak demand and customers keep being served, without paying for capacity that sits unused the rest of the time.

For years, teams have taken it for granted that no cloud environment is fully utilized 100% of the time. But automation can change that.

Autoscaling lets engineers fulfill demand while minimizing expenses in two ways:

  • Horizontal autoscaling: scaling out or in instances of a resource
  • Vertical autoscaling: scaling up or down within a resource’s capacity

Teams need real-time visibility into their capacity requirements if they want to practice autoscaling-based continuous capacity management. This is a great use case for automation. When done right, autoscaling enhances both availability and cost management.
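The horizontal case can be made concrete with the proportional formula used by the Kubernetes Horizontal Pod Autoscaler: scale the replica count by the ratio of observed to target utilization, clamped to configured bounds. The function name and bounds below are illustrative.

```python
import math

# Horizontal autoscaling decision, modeled on the Kubernetes HPA formula:
# desired = ceil(current_replicas * current_utilization / target_utilization)

def desired_replicas(current_replicas, current_util, target_util,
                     min_replicas=1, max_replicas=20):
    desired = math.ceil(current_replicas * current_util / target_util)
    # Clamp to the configured bounds so a metrics blip can't scale to zero
    # or to an unbounded (and unbudgeted) fleet.
    return max(min_replicas, min(desired, max_replicas))

# 4 replicas running hot at 90% CPU against a 60% target scale out to 6:
print(desired_replicas(4, 90, 60))  # -> 6
# 4 replicas idling at 20% scale in to 2, cutting cost while demand is low:
print(desired_replicas(4, 20, 60))  # -> 2
```

Vertical autoscaling follows the same principle within a single resource: instead of changing the replica count, the requested CPU/memory of each instance is adjusted toward observed usage.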

Automated cloud optimization is the future

Why should organizations continue relying on manual cloud cost reporting and management when automation is already addressing so many DevOps challenges?

Automation opens the door to building a cost optimization strategy that goes beyond conventional cloud cost monitoring and reporting, making a significant impact on the time and effort it takes to manage cloud costs over the whole application lifetime.
