Guest: Paul Pindell
Organization: Linux Foundation | Project: Open Programmable Infrastructure (OPI) Project
Show: Newsroom

In June 2022, the Linux Foundation announced the OPI Project to accelerate the adoption of data processing unit (DPU) and infrastructure processing unit (IPU) technologies. In this episode of TFiR: Newsroom, Paul Pindell, Outreach Working Group Chair – OPI Project, shares the status of the project one year later.

  • History: Two and a half years ago, F5, Red Hat, and IBM got together with the goal of creating a standardized framework that could be used to deploy, secure, and run infrastructure and applications on data processing units (DPUs) across all of the vendors in this space. A little over a year ago, the Open Programmable Infrastructure (OPI) Project became a Linux Foundation project.
  • The objective of the OPI project is to foster a community-driven, standards-based, open ecosystem for next-generation architectures and frameworks based on DPU/IPU-like technologies.
  • Project members: The premier members of the project are F5, ARM, Dell, Intel, Keysight Technologies, Marvell, NVIDIA, Red Hat, Tencent, and ZTE. The general members are DreamBig Semiconductor, Fujitsu, Hewlett Packard Enterprise (HPE), SolidRun, and UnifabriX.
  • A data processing unit is, in many cases, a card attached to the Peripheral Component Interconnect Express (PCIe) bus that has compute, memory, and storage on the card itself. It has its own endpoint and its own network identity. It is a way to offload infrastructure workloads away from the host and isolate them, so that the host can focus on processing the server workloads it needs to process.
  • Hyperscalers built their own non-standard frameworks, so the OPI Project is trying to create standard APIs. The ease of development and ease of deployment of the devices will drive efficiency and cost savings in large computing environments.
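The benefit of a standard, vendor-neutral API can be illustrated with a minimal sketch. All names below are invented for illustration; OPI's actual APIs are community-defined service definitions, not this Python class. The point is that deployment tooling written once against a common interface works unchanged across vendor backends.

```python
# Hypothetical illustration of a vendor-neutral DPU API.
# Class and method names are invented, not taken from the OPI Project.
from abc import ABC, abstractmethod


class DpuDriver(ABC):
    """Common interface that each card vendor implements for its own hardware."""

    @abstractmethod
    def provision_network(self, vlan_id: int) -> str:
        """Configure a network function on the card; return its identifier."""


class VendorADriver(DpuDriver):
    def provision_network(self, vlan_id: int) -> str:
        # Vendor-specific SDK calls would go here.
        return f"vendor-a-nic-vlan{vlan_id}"


class VendorBDriver(DpuDriver):
    def provision_network(self, vlan_id: int) -> str:
        return f"vendor-b-port-{vlan_id}"


def deploy(driver: DpuDriver, vlan_id: int) -> str:
    # Deployment tooling is written once, against the common API,
    # and works with any vendor's card.
    return driver.provision_network(vlan_id)


print(deploy(VendorADriver(), 100))  # vendor-a-nic-vlan100
print(deploy(VendorBDriver(), 100))  # vendor-b-port-100
```

Without such a shared interface, each hyperscaler or enterprise has to rewrite the `deploy` step per vendor, which is the duplication the project aims to eliminate.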
  • The OPI Project has been working closely with SmartNICs Summit, SONiC-DASH, and Storage Networking Industry Association (SNIA), presenting at the conferences and bringing the OPI experience to their presentations. It has also submitted sessions to the Open Compute Project (OCP) and is having early discussions with LF Edge.
  • Working groups: The API and Behavioral Model Working Group has worked on defining the taxonomy and the schema that will be used for APIs. The Use Case Working Group is working with deployment partners, i.e., end users of the cards, which are tier-1 or tier-2 cloud providers. The Dev Platform / PoC Working Group is building a way to test and work with these solutions.
  • They are actively looking for new members to help define these frameworks and write the code. Ideal new members are: 1) Vendors that make the cards. There are pieces and parts of using a DPU that are common across all of these vendors, and they all have to be deployed. It makes sense to pool their efforts, define that once, and then have each vendor use those methods within its own stack. 2) Integrators that take a DPU and integrate it into servers from Fujitsu, HPE, Dell, or another server vendor. They have a vested interest in simplifying how they deploy different DPUs in their system hardware. 3) End users/deployment partners, which at this point would be cloud providers and enterprises.
  • Currently in the works: secure zero-touch provisioning, an IPsec implementation based on strongSwan, and storage work, including demos showing how to offload storage and storage-management tasks from the host to the DPU.
  • Currently available: A fully simulated build environment that anybody can download to test out the code.
  • Next on the roadmap: Networking APIs and how to deploy a workload onto a DPU.

This summary was written by Camille Gregory.
