
    UALink Consortium Releases Ultra Accelerator Link 200G 1.0 Spec

    By Team_AIBS News | April 9, 2025


    Beaverton, OR – April 8, 2025 – The UALink Consortium today announced the ratification of the UALink 200G 1.0 Specification, which defines a low-latency, high-bandwidth interconnect for communication between accelerators and switches in AI computing pods.

    The UALink 1.0 Specification enables 200G per lane scale-up connections for up to 1,024 accelerators within an AI computing pod, delivering the open standard interconnect for next-generation AI cluster performance.
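    To put the headline numbers in perspective, here is a minimal back-of-the-envelope sketch in Python. The 200 Gb/s lane rate and the 1,024-accelerator pod size come from the announcement; the 4-lane link width and the one-directional counting are assumptions made purely for illustration.

```python
# Back-of-the-envelope scale-up bandwidth implied by the announced figures.
# Assumptions (not from the announcement): 4 lanes per accelerator link,
# bandwidth counted in one direction only.

LANE_RATE_GBPS = 200      # 200G per lane, per the specification
LANES_PER_LINK = 4        # assumed link width, for illustration only
MAX_ACCELERATORS = 1024   # pod size stated in the announcement

per_link_gbps = LANE_RATE_GBPS * LANES_PER_LINK
pod_aggregate_tbps = per_link_gbps * MAX_ACCELERATORS / 1000

print(f"Per-accelerator link: {per_link_gbps} Gb/s "
      f"({per_link_gbps / 8:.0f} GB/s)")
print(f"Aggregate across a 1,024-accelerator pod: {pod_aggregate_tbps:.0f} Tb/s")
```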

    “As the demand for AI compute grows, we’re delighted to deliver an essential, open industry standard technology that enables next-generation AI/ML applications to the market,” said Kurtis Bowman, UALink Consortium Board Chair. “UALink is the only memory semantic solution for scale-up AI, optimized for lower power, latency and cost while increasing effective bandwidth. The groundbreaking performance made possible with the UALink 200G 1.0 Specification will revolutionize how Cloud Service Providers, System OEMs, and IP/Silicon Providers approach AI workloads.”

    UALink creates a switch ecosystem for accelerators, supporting critical performance for emerging AI and HPC workloads. It enables accelerator-to-accelerator communication across system nodes using read, write, and atomic transactions, and defines a set of protocols and interfaces enabling the creation of multi-node systems for AI applications.
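    The announcement describes the protocol only at this level, so the following is a hypothetical Python sketch of what memory-semantic read, write, and atomic transactions mean in practice: peers operate directly on each other's memory rather than exchanging messages. The `Pod` class, its method names, and the list-backed memory model are illustrative inventions, not the UALink API.

```python
# Hypothetical illustration of memory-semantic transactions between
# accelerators in a pod. This is NOT the UALink API; it only models the
# read / write / atomic semantics the announcement describes.

class Pod:
    """Models each accelerator's memory as a flat word-addressed array."""

    def __init__(self, num_accelerators: int, words_per_accelerator: int):
        self.mem = [[0] * words_per_accelerator for _ in range(num_accelerators)]

    def write(self, dst: int, addr: int, value: int) -> None:
        # A store targeting a peer accelerator's memory.
        self.mem[dst][addr] = value

    def read(self, src: int, addr: int) -> int:
        # A load from a peer accelerator's memory.
        return self.mem[src][addr]

    def atomic_add(self, dst: int, addr: int, delta: int) -> int:
        # An atomic read-modify-write; returns the prior value.
        old = self.mem[dst][addr]
        self.mem[dst][addr] = old + delta
        return old

pod = Pod(num_accelerators=4, words_per_accelerator=8)
pod.write(dst=2, addr=0, value=41)       # accelerator 0 stores into 2's memory
pod.atomic_add(dst=2, addr=0, delta=1)   # another peer increments it atomically
print(pod.read(src=2, addr=0))           # -> 42
```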

    Features:

    • Performance
      • Low-latency, high-bandwidth interconnect for hundreds of accelerators in a pod.
      • Provides a simple load/store protocol with the raw speed of Ethernet and the latency of PCIe® switches.
      • Designed for deterministic performance, achieving 93% effective peak bandwidth (see the sketch after this list).
    • Power
      • Enables a highly efficient switch design that reduces power and complexity.
    • Cost
      • Uses a significantly smaller die area for the link stack, lowering power and acquisition costs and reducing Total Cost of Ownership (TCO).
      • Increased bandwidth efficiency further lowers TCO.
    • Open
      • Multiple vendors are developing UALink accelerators and switches.
      • Harnesses member-company innovation to drive innovative solutions into the specification and bring interoperable products to market.
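    As referenced in the performance bullets above, applying the quoted 93% effective-peak-bandwidth figure to the 200 Gb/s lane rate gives a concrete throughput number; the 4-lane link width is again an assumption for illustration only.

```python
# Effective throughput implied by the "93% effective peak bandwidth" claim.
LANE_RATE_GBPS = 200   # per the specification
LANES_PER_LINK = 4     # assumed link width, for illustration only
EFFICIENCY = 0.93      # figure quoted in the feature list

effective_gbps = LANE_RATE_GBPS * LANES_PER_LINK * EFFICIENCY
print(f"Effective per-link throughput: {effective_gbps:.0f} Gb/s "
      f"(~{effective_gbps / 8:.0f} GB/s)")   # -> 744 Gb/s (~93 GB/s)
```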

    “AI is advancing at an unprecedented pace, ushering in a new era of AI reasoning with new scaling laws. As the demand for compute surges and speed requirements continue to grow exponentially, scale-up interconnect solutions must evolve to keep pace with these rapidly changing AI workload requirements,” said Sameh Boujelbene, VP at Dell’Oro Group. “We are thrilled to see the release of the UALink 1.0 Specification, which rises to this challenge by enabling 200G per lane scale-up connections for up to 1,024 accelerators within the same AI computing pod. This milestone marks a significant step forward in addressing the demands of next-generation AI infrastructure.”

    “With the release of the UALink 200G 1.0 Specification, the UALink Consortium’s member companies are actively building an open ecosystem for scale-up accelerator connectivity,” said Peter Onufryk, UALink Consortium President. “We are excited to witness the variety of solutions that will soon be entering the market and enabling future AI applications.”

    The UALink 200G 1.0 Specification is available for public download at https://ualinkconsortium.org/specification/.




