
Materialize’s New Distributed Streaming Database Now Available


Materialize has announced early availability of its distributed streaming database, which is designed to make real-time data readily usable in applications, business functions, and other data products.

Materialize’s PostgreSQL-compatible interface lets users keep the tools they already use, with full ANSI SQL support keeping the learning curve low. Developers and data teams can build customer-facing workflows, data engineers can build data applications, and analytics engineers can run streaming analytics through integrations with platforms like dbt. Because results are always up to date, developers can quickly build automated, low-latency applications downstream.
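As a rough sketch of what that compatibility looks like in practice (the connection string below is a placeholder, not a value from the announcement), any standard PostgreSQL client can connect and issue ordinary SQL:

```sql
-- Connect with any PostgreSQL client or driver, e.g.:
--   psql "postgres://<user>@<host>:6875/materialize"
-- (placeholder connection string; use the one issued for your region)

-- Ordinary SQL and Postgres-style introspection work as expected.
SELECT mz_version();
SHOW SOURCES;
SHOW MATERIALIZED VIEWS;
```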

New features include:

  • Availability as a fully managed, cloud-native software-as-a-service platform
  • Elastic storage (AWS S3), separated from compute, increases scalability and availability while reducing costs
  • Strict serializability eliminates stale data and enables strong consistency guarantees
  • Multi-way complex joins support stream-to-stream, stream-to-table, table-to-table, and more, all in standard SQL
  • Horizontal scalability leverages Timely Dataflow to let users handle large, fast-scaling workloads
  • Active replication lets users spin up multiple clusters running the same workload for high availability (see the cluster sketch after this list)
  • Workload isolation lets users spin up multiple clusters running different workloads over the same shared elastic storage, enabling collaboration without worrying about interference
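A minimal sketch of how those last two features surface in SQL, assuming illustrative cluster names and replica sizes (exact size names depend on your Materialize plan and version):

```sql
-- Active replication: two replicas compute the same dataflows,
-- so the 'serving' cluster stays available if one replica fails.
CREATE CLUSTER serving REPLICAS (
  r1 (SIZE 'medium'),
  r2 (SIZE 'medium')
);

-- Workload isolation: a separate cluster runs a different workload
-- against the same shared storage, without interfering with 'serving'.
CREATE CLUSTER analytics REPLICAS (r1 (SIZE 'large'));

-- Route this session's queries to a specific cluster.
SET CLUSTER = serving;
```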

Built atop Timely Dataflow and Differential Dataflow, Materialize uses standard ANSI SQL and looks and acts like a Postgres database. It:

  • Incrementally maintains the results of SQL queries as materialized views, in memory or on cloud storage, providing millisecond-level latency on complex transformations, joins, or aggregations.
  • Ingests data from multiple sources, including relational databases, event streams, and data lakes, before transforming or joining it with the same complex SQL queries used with batch data warehouses.
  • Builds materialized views and incrementally updates their results as source data changes, rather than computing the answer to a query from scratch every time like a traditional database. Users may either query the results for fast, high-concurrency reads or subscribe to changes for purely event-driven architectures (see the sketch after this list).
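Putting those pieces together, a hedged end-to-end sketch might look like the following; the connection options, publication, and table names (orders, order_items) are hypothetical, and exact source syntax varies by connector and Materialize version:

```sql
-- Ingest an upstream Postgres database via logical replication.
CREATE SECRET pg_password AS '<password>';

CREATE CONNECTION pg_conn TO POSTGRES (
  HOST 'db.example.com',
  DATABASE 'shop',
  USER 'materialize',
  PASSWORD SECRET pg_password
);

CREATE SOURCE shop_source
  FROM POSTGRES CONNECTION pg_conn (PUBLICATION 'mz_publication')
  FOR ALL TABLES;

-- Incrementally maintained view joining two replicated tables.
CREATE MATERIALIZED VIEW order_totals AS
  SELECT o.customer_id, sum(i.price * i.quantity) AS total
  FROM orders o
  JOIN order_items i ON i.order_id = o.id
  GROUP BY o.customer_id;

-- Fast, always-up-to-date point reads...
SELECT total FROM order_totals WHERE customer_id = 42;

-- ...or a continuous stream of changes for event-driven consumers.
SUBSCRIBE TO order_totals;
```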