
Hammerspace Announces Reference Architecture For LLM Training


Hammerspace has announced the release of the data architecture being used for training and inference of Large Language Models (LLMs) within hyperscale environments. This architecture enables artificial intelligence (AI) technologists to design a unified data architecture that delivers the performance of a supercomputing-class parallel file system coupled with the ease of access that standard NFS gives applications and researchers.

The announcement details a proven architecture delivering the performance, ease of deployment, and standards-based software and hardware support required to meet the unique demands of LLM data pipelines and data storage.

Hammerspace Ultra High-Performance File System: Hammerspace unifies the entire data pipeline into a single, parallel global file system that integrates existing infrastructure and data with new datasets and resources as they are added. The parallel file system architecture is critical for training AI as countless processes or nodes need to access the same data simultaneously.
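As a rough illustration of that access pattern, the sketch below has several workers read disjoint byte ranges of one shared file concurrently, much as training nodes would against a file on a shared parallel mount. The path, worker count, and sharding scheme are placeholders for the example, not part of Hammerspace's design.

```python
"""Illustrative only: concurrent readers of one shared file.

On a parallel file system, many nodes read the same data at once; here,
threads in a single process stand in for those nodes."""
import os
from concurrent.futures import ThreadPoolExecutor

def read_shard(path: str, offset: int, length: int) -> int:
    """Read one byte range of the file and return how many bytes came back."""
    with open(path, "rb") as f:
        f.seek(offset)
        return len(f.read(length))

def parallel_read(path: str, workers: int = 8) -> int:
    """Split the file into `workers` contiguous shards and read them concurrently."""
    size = os.path.getsize(path)
    shard = max(1, (size + workers - 1) // workers)  # ceiling division
    with ThreadPoolExecutor(max_workers=workers) as pool:
        jobs = [pool.submit(read_shard, path, i * shard, shard)
                for i in range(workers)]
        return sum(j.result() for j in jobs)
```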

Hammerspace Standards-Based Software Approach: The Hammerspace parallel file system client is the standard NFS 4.2 client built into Linux, leveraging Hammerspace’s contribution of the FlexFiles layout to the Linux kernel. This approach enables standard Linux clients to achieve direct, high-performance access to data via Hammerspace’s software.
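One way to sanity-check that a client is on the in-kernel NFS 4.2 path is to inspect its mount options. The snippet below parses /proc/mounts-style lines; the mount point /mnt/hammerspace is an invented example, not a path from the article.

```python
"""Illustrative helper: confirm a mount point uses NFS with vers=4.2.

Parsing follows the standard /proc/mounts line format:
    device mount_point fstype options dump pass
The mount point used in __main__ is an assumption for the example."""
import os

def nfs_version(mount_point, mounts_lines):
    """Return the `vers=` value for an NFS mount at mount_point, or None."""
    for line in mounts_lines:
        fields = line.split()
        if len(fields) < 4:
            continue
        _device, where, fstype, options = fields[:4]
        if where == mount_point and fstype.startswith("nfs"):
            for opt in options.split(","):
                if opt.startswith("vers="):
                    return opt.split("=", 1)[1]
    return None

if __name__ == "__main__" and os.path.exists("/proc/mounts"):
    with open("/proc/mounts") as mounts:
        print(nfs_version("/mnt/hammerspace", mounts))
```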

Hammerspace on Commodity Hardware: Hammerspace provides a software-defined data platform compatible with any standards-based hardware such as white box Linux servers, Open Compute Project (OCP) hardware, Supermicro, etc. This allows organizations to better leverage their existing hardware investment and benefit from cost-effective infrastructure at scale.

Hammerspace Streamlined Data Pipelines: The Hammerspace architecture creates a unified, high-performance global data environment that provides concurrent and continuous execution of all phases of LLM training and inference workloads. Hammerspace is unique in its ability to break down data silos, seamlessly accessing training data scattered across diverse data center and cloud storage systems from any vendor or location. By leveraging training data wherever it might be stored, Hammerspace streamlines AI workloads by minimizing the need to copy and move files into a consolidated new repository.

Hammerspace High-Speed Data Path: Hammerspace reduces network transmissions and data hops at every point possible within the data path. This approach ensures near 100 percent utilization of the available infrastructure while delivering a streamlined, high-bandwidth, low-latency data path between applications, compute, and data storage nodes.

Hammerspace Fault-Tolerant Design: LLM environments are massive, complex systems with extensive power and infrastructure. These AI systems often rely on continuously updating models based on new data. Hammerspace is capable of operating at peak performance through a system outage, allowing AI technologists to focus less on recovering from power, network, or system failures and more on persisting through them.

Hammerspace Objective-Based Data Placement: Hammerspace software decouples the file system layer from the storage layer, enabling independent scaling of I/O and IOPS at the data layer. Extremely high-performance NVMe storage can co-exist with lower-cost, lower-performing, and geographically distributed storage tiers, including the cloud, in a global data environment. Data orchestration between tiers and locations is handled transparently as a background operation driven by objective-based policies.
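A toy version of such a policy might look like the following. The tier names, file attributes, and thresholds are invented for illustration; they are not Hammerspace's actual objective language.

```python
"""Toy objective-based placement policy; tiers and thresholds are invented.

A real system evaluates richer objectives continuously in the background;
this sketch just maps a file's attributes to a storage tier."""
from dataclasses import dataclass

@dataclass
class FileInfo:
    path: str
    size_bytes: int
    days_since_access: float

def choose_tier(info: FileInfo) -> str:
    """Pick a tier: hot data on NVMe, warm on HDD, cold in cloud object storage."""
    if info.days_since_access < 1:
        return "nvme"
    if info.days_since_access < 30:
        return "capacity-hdd"
    return "cloud-object"
```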

Integrated machine learning (ML) capabilities within the Hammerspace architecture will begin to place related datasets in high-performance, local NVMe storage when the first file from a dataset is accessed.
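A minimal sketch of that prefetch-on-first-access behavior, with an entirely invented catalog API: touching any file in a dataset for the first time queues the dataset's remaining files for promotion to local NVMe.

```python
"""Sketch of prefetch-on-first-access; the catalog API is invented."""

class PrefetchingCatalog:
    def __init__(self, datasets):
        # datasets: mapping of dataset name -> list of member file paths
        self.datasets = datasets
        self.promoted = set()

    def on_access(self, path):
        """Return sibling files to promote when a dataset is first touched."""
        for name, files in self.datasets.items():
            if path in files and name not in self.promoted:
                self.promoted.add(name)
                return [p for p in files if p != path]
        return []
```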