Beaverton, OR – April 8, 2025 – The UALink Consortium today announced the ratification of the UALink 200G 1.0 Specification, which defines a low-latency, high-bandwidth interconnect for communication between accelerators and switches in AI computing pods.
The UALink 1.0 Specification enables 200G-per-lane scale-up connections for up to 1,024 accelerators within an AI computing pod, delivering an open standard interconnect for next-generation AI cluster performance.
“As the demand for AI compute grows, we’re pleased to deliver an essential, open industry standard technology that brings next-generation AI/ML applications to the market,” said Kurtis Bowman, UALink Consortium Board Chair. “UALink is the only memory-semantic solution for scale-up AI, optimized for lower power, latency, and cost while increasing effective bandwidth. The groundbreaking performance made possible by the UALink 200G 1.0 Specification will revolutionize how Cloud Service Providers, System OEMs, and IP/Silicon Providers approach AI workloads.”
UALink creates a switch ecosystem for accelerators, supporting critical performance for emerging AI and HPC workloads. It enables accelerator-to-accelerator communication across system nodes using read, write, and atomic transactions, and defines a set of protocols and interfaces enabling the creation of multi-node systems for AI applications.
Features:
- Performance
  - Low-latency, high-bandwidth interconnect for hundreds of accelerators in a pod.
  - Provides a simple load/store protocol with the same raw speed as Ethernet and the latency of PCIe® switches.
  - Designed for deterministic performance, achieving 93% effective peak bandwidth.
- Power
  - Enables a highly efficient switch design that reduces power and complexity.
- Cost
  - Uses significantly smaller die area for the link stack, lowering power and acquisition costs and resulting in a reduced Total Cost of Ownership (TCO).
  - Increased bandwidth efficiency further enables lower TCO.
- Open
  - Multiple vendors are developing UALink accelerators and switches.
  - Harnesses member company innovation to drive innovative solutions into the specification and interoperable products to the market.
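The performance figures above imply some simple arithmetic: at 200 Gbit/s raw signaling per lane and the quoted 93% effective peak bandwidth, a minimal sketch of the per-link math looks like the following. The 4-lane link width is a hypothetical illustration only; the specification's actual link-width options are not stated in this release.

```python
# Hedged sketch of the effective-bandwidth arithmetic implied by the release.
# Assumptions: 200 Gbit/s per lane and 93% effective peak bandwidth (both
# quoted above); the 4-lane example width is hypothetical, not from the spec.
LANE_RATE_GBPS = 200.0   # raw signaling rate per lane, Gbit/s
EFFICIENCY = 0.93        # quoted effective peak bandwidth fraction

def effective_gbps(lanes: int) -> float:
    """Effective bandwidth, in Gbit/s, of a link built from `lanes` 200G lanes."""
    return lanes * LANE_RATE_GBPS * EFFICIENCY

print(effective_gbps(1))  # single lane: 186.0 Gbit/s effective
print(effective_gbps(4))  # hypothetical 4-lane link: 744.0 Gbit/s effective
```

The 93% figure is what distinguishes the "deterministic performance" claim from raw line rate: protocol overhead consumes the remaining 7% of the signaling bandwidth.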
“AI is advancing at an unprecedented pace, ushering in a new era of AI reasoning with new scaling laws. As the demand for compute surges and speed requirements continue to grow exponentially, scale-up interconnect solutions must evolve to keep pace with these rapidly changing AI workload requirements,” said Sameh Boujelbene, VP at Dell’Oro Group. “We’re thrilled to see the release of the UALink 1.0 Specification, which rises to this challenge by enabling 200G-per-lane scale-up connections for up to 1,024 accelerators within the same AI computing pod. This milestone marks a significant step forward in addressing the demands of next-generation AI infrastructure.”
“With the release of the UALink 200G 1.0 Specification, the UALink Consortium’s member companies are actively building an open ecosystem for scale-up accelerator connectivity,” said Peter Onufryk, UALink Consortium President. “We’re excited to see the variety of solutions that will soon be entering the market and enabling future AI applications.”
The UALink 200G 1.0 Specification is available for public download at https://ualinkconsortium.org/specification/.