    Breaking the Bottleneck: GPU-Optimised Video Processing for Deep Learning

    By Team_AIBS News | February 26, 2025 | 4 Mins Read


    Deep Learning (DL) applications often require processing video data for tasks such as object detection, classification, and segmentation. However, conventional video processing pipelines are typically inefficient for deep learning inference, leading to performance bottlenecks. In this post, we will leverage PyTorch and FFmpeg with NVIDIA hardware acceleration to achieve this optimisation.

    The inefficiency comes from how video frames are typically decoded and transferred between CPU and GPU. The standard workflow found in the majority of tutorials follows this structure:

    1. Decode Frames on CPU: Video files are first decoded into raw frames using CPU-based decoding tools (e.g., OpenCV, FFmpeg without GPU support).
    2. Transfer to GPU: These frames are then transferred from CPU to GPU memory to perform deep learning inference using frameworks like TensorFlow, PyTorch, ONNX, etc.
    3. Inference on GPU: Once the frames are in GPU memory, the model performs inference.
    4. Transfer Back to CPU (if needed): Some post-processing steps may require data to be moved back to the CPU.

    This CPU-GPU transfer process introduces a significant performance bottleneck, especially when processing high-resolution videos at high frame rates. The unnecessary memory copies and context switches slow down overall inference speed, limiting real-time processing capabilities.

    For instance, the following snippet shows the typical video processing pipeline that you probably came across when you were starting to learn deep learning:
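    A minimal sketch of such a pipeline is given below, assuming OpenCV for CPU decoding and a placeholder torchvision classifier (the file name video.mp4 and the model choice are illustrative):

    ```python
    import cv2
    import torch

    # Placeholder model: any GPU-resident network would show the same bottleneck.
    model = torch.hub.load("pytorch/vision", "resnet50", weights="DEFAULT").eval().cuda()

    cap = cv2.VideoCapture("video.mp4")  # frames are decoded on the CPU
    with torch.no_grad():
        while True:
            ret, frame = cap.read()  # raw BGR frame lands in host (CPU) memory
            if not ret:
                break
            # BGR -> RGB, HWC -> CHW, then a host-to-device copy for every frame
            tensor = torch.from_numpy(frame[:, :, ::-1].copy()).permute(2, 0, 1)
            batch = tensor.float().div(255).unsqueeze(0).cuda()
            prediction = model(batch)  # inference finally happens on the GPU
    cap.release()
    ```

    Every iteration pays for a CPU decode plus a full host-to-device copy, which is exactly the overhead the rest of this post removes.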

    The Solution: GPU-Based Video Decoding and Inference

    A more efficient approach is to keep the entire pipeline on the GPU, from video decoding to inference, eliminating redundant CPU-GPU transfers. This can be achieved using FFmpeg with NVIDIA GPU hardware acceleration.

    Key Optimisations

    1. GPU-Accelerated Video Decoding: Instead of using CPU-based decoding, we leverage FFmpeg with NVIDIA GPU acceleration (NVDEC) to decode video frames directly on the GPU.
    2. Zero-Copy Frame Processing: The decoded frames remain in GPU memory, avoiding unnecessary memory transfers.
    3. GPU-Optimised Inference: Once the frames are decoded, we perform inference directly with any model on the same GPU, significantly reducing latency.

    Hands on!

    Prerequisites

    To achieve the aforementioned improvements, we will be using the following dependencies: FFmpeg built with NVIDIA GPU acceleration (NVDEC) and the PyTorch stack (Torch, Torchaudio, Torchvision), in the versions listed below.

    Installation

    To get a deep insight into how FFmpeg is installed with NVIDIA GPU acceleration, please follow these instructions.

    Tested with:

    • System: Ubuntu 22.04
    • NVIDIA Driver Version: 550.120
    • CUDA Version: 12.4
    • Torch: 2.4.0
    • Torchaudio: 2.4.0
    • Torchvision: 0.19.0
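    The Python-side packages can be installed with pip, pinned to the tested versions above (a sketch; pick the wheel index that matches your CUDA setup):

    ```bash
    pip install torch==2.4.0 torchaudio==2.4.0 torchvision==0.19.0
    ```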

    1. Install the NV-Codecs
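    A sketch of this step, following NVIDIA's public FFmpeg build guide (default install paths assumed):

    ```bash
    # Fetch and install the NVENC/NVDEC headers that FFmpeg compiles against
    git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git
    cd nv-codec-headers
    sudo make install
    cd ..
    ```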

    2. Clone and configure FFmpeg
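    Again a sketch based on NVIDIA's documented configure flags; --enable-shared is assumed here because torchaudio loads FFmpeg as shared libraries at runtime:

    ```bash
    # Build FFmpeg with CUDA/NVDEC support enabled
    git clone https://git.ffmpeg.org/ffmpeg.git ffmpeg/
    cd ffmpeg
    ./configure --enable-nonfree --enable-cuda-nvcc --enable-libnpp \
        --enable-shared --disable-static \
        --extra-cflags=-I/usr/local/cuda/include \
        --extra-ldflags=-L/usr/local/cuda/lib64
    make -j"$(nproc)"
    sudo make install
    ```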

    3. Validate whether the installation was successful with torchaudio.utils
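    One quick check from Python is to list the video decoders that torchaudio's FFmpeg bindings expose; the NVDEC-backed *_cuvid entries should be present:

    ```python
    from torchaudio.utils import ffmpeg_utils

    # Decoders exposed by the FFmpeg libraries torchaudio linked against
    decoders = ffmpeg_utils.get_video_decoders()
    print("h264_cuvid" in decoders)  # expected: True if NVDEC support is in place
    print("hevc_cuvid" in decoders)  # expected: True
    ```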

    Time to code an optimised pipeline!
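    Below is a minimal sketch of the GPU-resident pipeline built on torchaudio.io.StreamReader; the file name, chunk size, and model are again illustrative:

    ```python
    import torch
    from torchaudio.io import StreamReader

    # Same placeholder model as before, resident on the GPU
    model = torch.hub.load("pytorch/vision", "resnet50", weights="DEFAULT").eval().cuda()

    reader = StreamReader("video.mp4")
    reader.add_video_stream(
        frames_per_chunk=32,   # frames delivered per iteration, already batched
        decoder="h264_cuvid",  # NVIDIA hardware decoder (NVDEC)
        hw_accel="cuda:0",     # keep decoded frames in CUDA memory
    )

    with torch.no_grad():
        for (chunk,) in reader.stream():
            # chunk is a uint8 CUDA tensor of shape (frames, channels, H, W).
            # Hardware-decoded frames arrive as YUV; a production pipeline would
            # convert YUV -> RGB on the GPU before an RGB-trained model.
            frames = chunk.float().div(255)
            outputs = model(frames)  # no host<->device copies in the whole loop
    ```

    Decoding, preprocessing, and inference now share one device, so the per-frame host transfers of the typical pipeline disappear.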

    Benchmarking

    To benchmark whether this makes any difference, we will be using this video from Pexels by Pawel Perzanowski. Since most videos there are quite short, I have stacked the same video multiple times to produce results for different video lengths. The original video is 32 seconds long, which gives us a total of 960 frames. The new, modified videos have 5520 and 9300 frames respectively.
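    One way to produce such extended clips (a sketch; file names are assumed, and this may not be exactly how the test videos were built) is FFmpeg's concat demuxer, which repeats the clip without re-encoding:

    ```bash
    # List the same source clip several times, then concatenate stream-copies
    for i in $(seq 6); do echo "file 'original.mp4'"; done > list.txt
    ffmpeg -f concat -safe 0 -i list.txt -c copy stacked.mp4
    ```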

    Original video

    • typical workflow: 28.51s
    • optimised workflow: 24.2s

    Okay… it doesn't look like much of an improvement, right? Let's test it with longer videos.

    Modified video v1 (5520 frames)

    • typical workflow: 118.72s
    • optimised workflow: 100.23s

    Modified video v2 (9300 frames)

    • typical workflow: 292.26s
    • optimised workflow: 240.85s

    As the video duration increases, the benefits of the optimisation become more evident. In the longest test case, we achieve an 18% speedup, demonstrating a significant reduction in processing time. These performance gains are particularly important when handling large video datasets or real-time video analysis tasks, where small efficiency improvements accumulate into substantial time savings.

    Conclusion

    In today's post, we have explored two video processing pipelines: the typical one, in which frames are copied from CPU to GPU, introducing noticeable bottlenecks, and an optimised one, in which frames are decoded on the GPU and passed directly to inference, saving a considerable amount of time as video duration increases.

