
    Intro to NCCL: Efficient Multi-GPU Communication for Distributed Training | by Huayu Zhang | May, 2025

    By Team_AIBS News · May 24, 2025 · 2 Mins Read


    NVSwitch connects all GPUs in a node through a switch fabric, preserving NVLink’s point-to-point nature while providing uniform all-to-all communication.

    All-to-all connection based on NVSwitch

    NVSwitch Pros & Cons

    Pros:

    • Full bandwidth, all-to-all connections
    • Uniform latency and no routing complexity
    • Simplifies NCCL collective operations
    • Enables efficient tensor-parallel/model-parallel training
    • Essential for DGX, HGX, and NVIDIA SuperPOD deployments

    Cons:

    • Expensive and power-hungry
    • Requires specialized hardware/chassis (e.g., DGX, NVIDIA’s high-end 8-GPU server connected via NVSwitch and optimized for large-scale deep learning)

    Multi-Host Setup with InfiniBand + NVLink

    In a multi-host NVSwitch-based GPU system, the GPUs and the NVSwitch do not connect directly to the network interface. Instead, GPUs access the NIC (e.g., ConnectX-7) through the host’s PCIe subsystem. For inter-node communication, each NIC is connected via InfiniBand, forming a high-speed cluster fabric. GPUDirect RDMA allows the NIC to perform direct memory access (DMA) to and from GPU memory over PCIe, completely bypassing the CPU and system memory. This zero-copy transfer path significantly reduces communication latency and CPU overhead, enabling efficient, large-scale distributed training across multiple GPU servers. The combination of NVSwitch (for intra-node) and GPUDirect RDMA (for inter-node) provides a seamless, high-performance data path for collective operations like NCCL’s all_reduce.

    We illustrate the all_reduce operation on 4 GPUs, each holding an 8-element vector [x0–x7], split into 4 chunks.

    Each GPU begins with the full tensor of 8 elements, divided into 4 chunks:

    • chunk0: indices [0,1]
    • chunk1: indices [2,3]
    • chunk2: indices [4,5]
    • chunk3: indices [6,7]

    Each GPU reduces one unique chunk by summing that chunk across all GPUs.

    Each GPU now owns one fully reduced chunk. These chunks are then shared with all other GPUs in a ring pattern over 3 steps.

    All GPUs now hold the complete reduced tensor.
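    The two phases above (reduce-scatter, then all-gather, each in 3 ring steps) can be sketched as a CPU-only simulation in plain Python. This is purely illustrative of the ring schedule, not NCCL’s actual implementation; the function name and the chunk-indexing convention are our own, though they follow the standard ring assignment where rank r ends the first phase owning chunk (r + 1) mod n.

    ```python
    def ring_all_reduce(tensors):
        """Simulate a ring all_reduce: reduce-scatter, then all-gather.

        tensors: one list per 'GPU'; length must be divisible by the rank count.
        Returns the per-rank buffers, each holding the elementwise sum.
        """
        n = len(tensors)                  # number of ranks (and chunks)
        chunk = len(tensors[0]) // n      # elements per chunk
        bufs = [list(t) for t in tensors]

        # Phase 1: reduce-scatter. After n-1 steps, rank r owns the fully
        # summed chunk (r + 1) % n.
        for step in range(n - 1):
            prev = [b[:] for b in bufs]   # snapshot = simultaneous send/recv
            for r in range(n):
                src = (r - 1) % n         # ring neighbour sending to rank r
                c = (src - step) % n      # chunk src forwards this step
                for i in range(c * chunk, (c + 1) * chunk):
                    bufs[r][i] += prev[src][i]

        # Phase 2: all-gather. Each rank circulates its completed chunk;
        # after n-1 steps every rank holds the full reduced tensor.
        for step in range(n - 1):
            prev = [b[:] for b in bufs]
            for r in range(n):
                src = (r - 1) % n
                c = (src + 1 - step) % n  # reduced chunk src forwards
                lo, hi = c * chunk, (c + 1) * chunk
                bufs[r][lo:hi] = prev[src][lo:hi]
        return bufs

    # 4 'GPUs', each with an 8-element vector split into 4 chunks of 2.
    inputs = [[g * 10 + i for i in range(8)] for g in range(4)]
    out = ring_all_reduce(inputs)
    expected = [sum(col) for col in zip(*inputs)]
    assert all(b == expected for b in out)
    ```

    Note that each rank exchanges only one chunk per step, which is why the ring algorithm’s per-GPU traffic stays near-constant as the number of GPUs grows.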


