
Tecton Adds Low-Latency Streaming Pipelines To Help Customers Build Real-Time ML Applications Faster


Enterprise feature store company Tecton has added low-latency streaming pipelines to its feature store so that organizations can quickly and reliably build real-time ML models.

With Tecton, data teams can build and deploy features using streaming data sources such as Kafka or Kinesis in hours. Users only need to provide the data transformation logic using Tecton's primitives, and Tecton executes that logic in fully managed operational data pipelines that can process and serve features in real time.

Tecton also processes historical data to create training datasets and backfills that are consistent with the online data, eliminating training/serving skew. Time-window aggregations, by far the most common feature type in real-time ML applications, are supported out of the box with an optimized implementation.
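To make the idea of a time-window aggregation concrete, here is a minimal, self-contained sketch of the kind of computation such a feature pipeline performs: counting each key's events over a trailing time window as a stream arrives. This is an illustration of the concept only, written against plain Python; it does not use Tecton's actual SDK, and the function name and signature are hypothetical.

```python
from collections import defaultdict, deque

def sliding_window_count(events, window_seconds=60):
    """Illustrative sketch of a trailing time-window aggregation.

    `events` is an iterable of (timestamp_seconds, key) pairs in
    ascending timestamp order. Yields (timestamp, key, count), where
    count is the number of events for that key within the last
    `window_seconds` seconds, including the current event.
    """
    windows = defaultdict(deque)  # key -> timestamps still inside the window
    for ts, key in events:
        w = windows[key]
        w.append(ts)
        # Evict events that have fallen out of the trailing window.
        while w and w[0] <= ts - window_seconds:
            w.popleft()
        yield ts, key, len(w)
```

For example, three events for one user at t=0, t=30, and t=90 with a 60-second window would produce counts of 1, 2, and 1: the first two events fall inside the same window, while by t=90 both earlier events have expired. A managed feature platform runs the equivalent logic continuously against the stream and serves the latest value at low latency, rather than recomputing it per request.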

Data teams that are already using real-time ML can now build and deploy models faster, increase prediction accuracy, and reduce the load on engineering teams. Data teams that are new to streaming can build a new class of real-time ML applications that require ultra-fresh feature values. Tecton said it simplifies the most difficult step in the transition to real-time ML: building and operating the streaming ML pipelines.