Contributory Blogs

Efficiency: The overlooked innovation that defines real progress

green technology

Author: Roman Khavronenko, Co-Founder, VictoriaMetrics
Bio: Roman is a software engineer with experience in distributed systems, monitoring, and high-performance services. The idea for VictoriaMetrics formed while Roman and Aliaksandr Valialkin, Co-Founder, were working at the same company. Before VictoriaMetrics, Roman worked at Cloudflare in London. He now lives in Poland with his wife and son.


As technology advances, we often assume that each new generation of hardware will be faster and more powerful than the last. Yet processor manufacturers now openly discuss the limits of current chip fabrication methods, and most users cannot afford the hardware that reaches today's performance ceilings anyway. Topography limitations have likewise been debated for decades, and the consensus is that future gains will require fundamentally new designs. At some point in the next decade, advances in materials science will be needed before hardware can keep improving at the rate we have become used to.

That point is still years away, but it doesn't mean prioritizing software efficiency today has no advantages. There are tangible benefits for firms that tackle inefficiencies rather than rely on future hardware to compensate for bloated code. As long as organizations can still afford the hardware needed to do the work, there is little incentive to be more efficient: hardware is cheaper than engineers who can optimize software. Firms that focus on efficiency today, however, will be a step ahead when that is no longer the case.

Compress data, compress costs

Compression algorithms play a vital role, one that will only grow as costs rise. The less data that has to move across a network, the more money can be saved. Better data compression should be the first step for developers pursuing efficiency, because its benefits trickle down to the rest of the system: even on the same hardware, processing smaller data packets is far quicker than working with uncompressed data. It also means less data in transit across or between servers (moving data between providers can be exceptionally costly) and less disk used to store data, which takes a large chunk out of cloud bills. That said, part of efficiency is having a balanced approach to compression. Data can be over-compressed, with the saved compute resources then spent on decompression, leaving you back where you began.
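As a rough illustration of that trade-off, here is a minimal Python sketch using the standard library's zlib: a lower compression level is cheap on CPU but yields a larger output, while a higher level spends more compute for a smaller payload. The telemetry payload itself is invented for illustration.

```python
import zlib

# Hypothetical telemetry payload: repetitive JSON-like text compresses well.
payload = b'{"metric":"cpu_usage","value":0.42,"host":"edge-01"}\n' * 1000

fast = zlib.compress(payload, level=1)  # cheap on CPU, larger output
best = zlib.compress(payload, level=9)  # more CPU spent, smaller output

print(f"raw: {len(payload)} B, level 1: {len(fast)} B, level 9: {len(best)} B")

# Round-trip check: decompression restores the original bytes exactly.
assert zlib.decompress(best) == payload
```

The gap between the levels shrinks as data gets less compressible, which is exactly where chasing the highest level stops paying for the extra CPU it burns.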

Edge computing…it’s still relevant

Edge computing is a dated term, but moving processing out of the data center can still have an outsized impact on costs. In monitoring, for example, aggregating telemetry at the point of generation saves on cloud expenses. The net compute cost is not actually lower, just relocated: small data sets can be parsed as effectively by IoT devices as by a large server aggregating telemetry inputs from thousands or millions of them.
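The idea can be sketched in a few lines of Python: a hypothetical edge device reduces a minute of per-second readings to a handful of summary statistics before anything crosses the network. The function name and summary fields here are assumptions, not any particular product's API.

```python
from statistics import mean

def aggregate(samples):
    """Reduce raw sensor readings to a compact summary before
    sending them upstream (a hypothetical edge-side step)."""
    return {
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": mean(samples),
    }

# Sixty per-second readings become four numbers on the wire.
readings = [20.0 + 0.1 * i for i in range(60)]
summary = aggregate(readings)
print(summary)
```

The compute to summarize the readings still happens, but on the device that produced them; only the summary travels, so network transit and central storage both shrink.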

Simplicity is efficiency

When designing a scalable distributed system, simplicity should come first because it facilitates future scalability. For a system component, simplicity means focusing on a single purpose and performing it exceptionally well, without relying on fine-tuning, other components, or dependencies. Decoupling then lets individual components scale independently, because the system no longer has to stay in equilibrium to function optimally. Otherwise, scaling any one component can ripple through the system: scaling the serving of read queries in a database might require scaling the caching layer, which necessitates scaling another component, and so forth. The key to cost efficiency lies in simplicity and transparency. The fewer hidden mechanisms and components involved, the greater the efficiency will be.

Efficiency is green

Previously, energy waste and e-waste costs were often externalized. Now, regulations and consumer awareness mean these issues carry real business costs, whether through taxes or lost revenue. Efficiency controls emissions by reducing resource use, insulating businesses from these expenses. Efficient software also extends the lifespan of hardware: much of the pressure to constantly upgrade comes from the ever-growing compute demands of software. It is as if each generation of cars had worse fuel efficiency than the last; framed in those terms, the inherent unsustainability is obvious.

If it can be measured, it can be managed

A good way of grappling with spiraling costs is to measure and track them at the point of generation. Open-source monitoring platforms give businesses the flexibility to tailor their monitoring systems to specific sustainability metrics. This enables organizations to collect and analyze data from various sources, such as energy meters, sensors, and environmental monitoring devices, whilst remaining cost-efficient. Comprehensive data collection allows businesses to gain deep insight into their energy consumption patterns and identify where efficiency improvements can be made – and as open source is popular with engineers, the necessary skills are likely already in-house.
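As a minimal sketch, a custom sustainability metric could be emitted in the Prometheus text exposition format, which open-source backends (VictoriaMetrics among them) can ingest. The metric name and label below are invented for illustration, not a standard.

```python
def energy_metric(site, joules):
    """Format a hypothetical cumulative energy counter as a line of
    Prometheus text exposition format: name{labels} value."""
    return f'site_energy_joules_total{{site="{site}"}} {joules}'

line = energy_metric("dc-west", 18200)
print(line)
```

Once readings from meters or sensors are exposed this way, the same dashboards and alerting rules used for latency or error rates can track energy consumption.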

Businesses can also proactively monitor their infrastructure, applications, and operations for potential inefficiencies or wasteful practices. Real-time monitoring and alerting mechanisms enable swift detection and remediation of issues that could lead to excessive energy consumption or environmental impact. Speed is of the essence when it comes to climate change, so the quicker problems can be identified, the better. By identifying bottlenecks, optimizing resource allocation, and streamlining operations, companies can minimize their ecological footprint and maximize energy efficiency.

Licenses are a cost most can do without

Open-source solutions themselves are a great way to reduce costs. Many are built from free and open-source software (FOSS) components, which cost nothing by default or carry lower licensing fees than proprietary software. This reduces upfront expenses, allowing firms to allocate their resources more efficiently. Most open-source solutions can also be deployed on existing hardware infrastructure, eliminating the need for additional hardware investment.

Many industries face compliance and regulatory requirements that are increasingly stringent. Open-source solutions often provide customizable compliance features, letting businesses meet these requirements without breaking the bank. Open-source software can also be customized to a far greater degree than closed source: the code can be modified without violating an end-user license agreement, giving users flexibility for specific deployments beyond what a single vendor will offer.

Ultimately, developers have a growing responsibility to build sustainable, resource-efficient products. As computing matures, better hardware cannot be assumed or promised. At some point, efficiency may be the only “innovation” left.

###

To learn more about Kubernetes and the cloud native ecosystem, join us at KubeCon + CloudNativeCon Europe in Paris from March 19-22.