With the introduction of smartphones, and with all of us now carrying computers in our pockets, has come an explosion of data. Just a few years ago we were in awe of a 1 terabyte (TB) hard drive; now software companies casually talk about petabytes. Next up: exabytes, or one quintillion bytes. It's all cool until you want to find some useful information in that storm of data.

Users need to see value from these massive data sets (e.g., at what point do customers drop out of the purchase process?). With event-driven architectures, engineers also need to capture data about data so they can figure out what's going on inside their systems as the data flows from one API to the next, out to the edge and back, at speeds previously thought impossible.

“This explosion,” said Alan Chen, Principal Product Manager at Elastic, “comes strong on many fronts, with massive data volumes coming in at extreme velocity, along with a constantly growing variety of data sources, some of which exhibit tricky schema drift.”

This need to manage data at scale drove a rise in APIs, each with its specific task. Building systems from API building blocks, sometimes numbering in the thousands, gave rise to Software as a Service (SaaS), which in turn is driving the need for simplified integration. To leverage data at scale, these SaaS bundles need monitoring. This in turn created the need for integration platform as a service (iPaaS), which is essentially a set of APIs and SaaS applications deployed in different environments, integrating on-premises applications and data with cloud applications and data.

“The Elastic Stack has made it simple to ingest, search, analyze, and visualize data. The distributed nature of Elastic makes it easy to scale-out using commodity hardware as your data grows. The tight integration between components that ingest, search, analyze, and visualize data delivers an end-to-end data pipeline and makes it easy to derive valuable insights from your data,” said Radhesh Menon, CMO of Robin Systems.

Which is where Elastic comes in.

The hybrid integration platform started as an open source project back in 2009 and converted to a business in 2013. Elastic set out to solve those problems and is evolving along with the burgeoning world of microservices by offering an expanding set of APIs.

Elasticsearch is a fit for operational and security analytics use cases where streaming data is key. Data not only needs to be ingested, but also indexed and made available in milliseconds. Elasticsearch follows a schema-on-write model, which provides low-millisecond query-time latency even across sophisticated aggregations and analytics.
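The schema-on-write idea is that documents are parsed and indexed when they arrive, so a query becomes a cheap lookup rather than a scan. A toy sketch of the concept (illustrative only; the `ToyIndex` class is hypothetical and not Elasticsearch's actual implementation):

```python
from collections import defaultdict

class ToyIndex:
    """Minimal inverted index illustrating schema-on-write."""

    def __init__(self):
        self.docs = {}                    # doc id -> original document
        self.inverted = defaultdict(set)  # (field, token) -> set of doc ids

    def index(self, doc_id, doc):
        """Schema applied on write: each field is tokenized up front."""
        self.docs[doc_id] = doc
        for field, value in doc.items():
            for token in str(value).lower().split():
                self.inverted[(field, token)].add(doc_id)

    def search(self, field, token):
        """Query time is a dictionary lookup -- the expensive parsing
        already happened at write time."""
        return sorted(self.inverted.get((field, token.lower()), set()))

idx = ToyIndex()
idx.index(1, {"message": "checkout failed", "level": "error"})
idx.index(2, {"message": "checkout succeeded", "level": "info"})
print(idx.search("message", "checkout"))  # -> [1, 2]
print(idx.search("level", "error"))       # -> [1]
```

The trade-off is extra work at ingest time in exchange for predictable, fast queries, which is why the model suits streaming analytics.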

Clean data, Chen stressed, is fundamental for analytics. “Enriched clean data drives even stronger analytics by supporting data source correlation downstream.”

Over the last two years, Elastic has been building modules enabling turnkey ingest-to-visualization experiences for popular logs and metrics data, gathering it from servers, databases, queues, containers, and other parts of the stack to augment its suite of products, including Elasticsearch, Beats, Logstash, and Kibana.

“ROBIN Hyper-converged Kubernetes platform delivers a production-ready solution for the Elastic Stack by extending Kubernetes with built-in storage, networking, and application management. ROBIN automates the provisioning and management of Elastic clusters so that you can deliver an ‘as-a-service’ experience with 1-click simplicity to your DevOps teams, BI analysts, and Data Scientists,” explained Menon.

Beats is Elastic's suite of agents that collect log, metric, wire, and security data from thousands or tens of thousands of servers. Beats streams data over to Logstash, a collection of streaming analytics engines that transform data and persist it in microbatches. Kibana is the all-seeing dashboard.
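A minimal Logstash pipeline sketch of that flow (the port, grok pattern, hosts, and index name below are assumptions for illustration, not values from the article):

```conf
# Illustrative only: Beats in, transform in the middle, Elasticsearch out.
input {
  beats {
    port => 5044            # a Beats agent (e.g., Filebeat) ships events here
  }
}
filter {
  grok {
    # Parse a simple "LEVEL message" log line into structured fields
    match => { "message" => "%{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "logs-%{+YYYY.MM.dd}"   # daily indices, browsable in Kibana
  }
}
```

Each stage is pluggable, which is what lets the same pipeline shape handle logs, metrics, wire data, or security events.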

“Our ingest story is constantly evolving,” said Chen, “and we’ve been making significant product innovations to make our users’ lives easier from DevOps to data analysis.”  

ROBIN provides self-service provisioning and management capabilities to Elastic users, significantly improving their productivity. ROBIN has automated the end-to-end cluster provisioning process for the Elastic Stack, including custom stacks with different versions and combinations of Elasticsearch, Logstash, Kibana, Beats, and Kafka. The entire provisioning process takes only a few minutes. ROBIN provides authentication, access control, and encryption (both in motion and at rest) to secure your ELK deployment. Rack-aware placement rules for Master and Data Nodes ensure your HA setup is production-ready.

To that end, Elastic recently introduced the Infrastructure UI, which offers out-of-the-box, end-to-end monitoring, so users can watch their entire infrastructure deployment across host endpoints, Kubernetes pods, and Docker containers.

“Elastic users can turbocharge their productivity by adopting Kubernetes as the primary platform to run Elastic. Kubernetes brings the agility to deploy and decommission clusters as needed, making more efficient use of hardware resources, as well as developers' time,” concluded Menon.

Swapnil Bhartiya contributed to the story. 

By TC Currie
