News

Hammerspace introduces high-performance NAS architecture to accelerate data pipelines


Hammerspace today introduced the high-performance NAS architecture needed to address the requirements of broad-based enterprise AI, machine learning and deep learning (AI/ML/DL) initiatives and the widespread rise of GPU computing both on-premises and in the cloud. This new category of storage architecture – Hyperscale NAS – is built on the tenets required for large language model (LLM) training and provides the speed to efficiently power GPU clusters of any size for GenAI, rendering and enterprise high-performance computing.

Hyperscale NAS provides the best architecture for training effective models, speeding time-to-market and time-to-insight, and ultimately deriving business value from data.

“Enterprises pursuing AI initiatives will encounter challenges with their existing IT infrastructure in terms of the tradeoffs between speed, scale, security and simplicity,” said David Flynn, Hammerspace Founder and CEO. “These organizations require the performance and cost-effective scale of HPC parallel file systems and must meet enterprise requirements for ease of use and data security. Hyperscale NAS is a fundamentally different NAS architecture that allows organizations to use the best of HPC technology without compromising enterprise standards.”

The Hammerspace Hyperscale NAS architecture is ideal for both hyperscalers and enterprises: it does not require proprietary client software, efficiently scales to meet the demands of any number of GPUs during training and inference, runs on existing Ethernet or InfiniBand networks and existing commodity or third-party storage infrastructure, and includes a complete set of data services to meet compliance, security and data governance requirements.

Hammerspace also announced that its Hyperscale NAS is now available with NVIDIA GPUDirect Storage support, enabling customers to make any storage GPUDirect Storage-capable to accelerate AI and deep learning pipelines.