
    2-bit VPTQ: 6.5x Smaller LLMs while Preserving 95% Accuracy

    By Team_AIBS News | February 1, 2025


    Very accurate 2-bit quantization for running 70B LLMs on a 24 GB GPU

    Towards Data Science

    Image generated with ChatGPT

    Recent advances in low-bit quantization for LLMs, such as AQLM and AutoRound, are now showing acceptable levels of degradation on downstream tasks, especially for large models. That said, 2-bit quantization still introduces noticeable accuracy loss in most cases.

    One promising algorithm for low-bit quantization is VPTQ (MIT license), proposed by Microsoft. It was released in October 2024 and has since shown excellent performance and efficiency in quantizing large models.
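
    To build some intuition before the review, here is a toy sketch of plain vector quantization, the building block that VPTQ extends: weights are grouped into short vectors, a small codebook of centroids is learned, and each vector is stored as the index of its nearest centroid. This is a simplified illustration only, not Microsoft's actual algorithm (VPTQ additionally uses second-order, Hessian-aware optimization and other refinements), and all names below are ours.

```python
import numpy as np

def vector_quantize(W, codebook_bits=8, vector_len=4, n_iters=10):
    """Toy vector quantization of a weight matrix W (illustration only).

    Groups weights into vectors of length `vector_len`, learns a codebook
    of 2**codebook_bits centroids with plain k-means, and replaces each
    vector by its nearest centroid.
    """
    k = 2 ** codebook_bits                        # number of centroids
    vecs = W.reshape(-1, vector_len)              # group weights into vectors
    rng = np.random.default_rng(0)
    centroids = vecs[rng.choice(len(vecs), k, replace=False)]  # init from data
    for _ in range(n_iters):
        # Assign each weight vector to its nearest centroid.
        d = ((vecs[:, None, :] - centroids[None]) ** 2).sum(-1)
        idx = d.argmin(1)
        # Update each centroid to the mean of its assigned vectors.
        for c in range(k):
            members = vecs[idx == c]
            if len(members):
                centroids[c] = members.mean(0)
    # What gets stored: `idx` (codebook_bits per vector) plus `centroids`.
    W_hat = centroids[idx].reshape(W.shape)
    return W_hat, idx, centroids

W = np.random.randn(256, 256).astype(np.float32)
W_hat, idx, centroids = vector_quantize(W)
print("reconstruction MSE:", float(((W - W_hat) ** 2).mean()))
```

    With an 8-bit codebook over vectors of 4 weights, the indices cost 2 bits per weight, which is where the "2-bit" figure comes from (plus a small codebook overhead).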

    In this article, we will:

    1. Review the VPTQ quantization algorithm.
    2. Demonstrate how to use VPTQ models, many of which are already available. For instance, we can easily find low-bit variants of Llama 3.3 70B, Llama 3.1 405B, and Qwen2.5 72B (a minimal loading sketch follows this list).
    3. Evaluate these models and discuss the results to understand when VPTQ models can be a good choice for LLMs in production.
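
    As a preview of step 2, below is a minimal loading sketch. It assumes the `vptq` package (`pip install vptq`) and a VPTQ-community checkpoint on the Hugging Face Hub, following the usage pattern shown in the VPTQ repository; the model ID is illustrative, so check the Hub for currently available checkpoints.

```python
import transformers
import vptq  # pip install vptq (Microsoft's VPTQ inference package)

# Illustrative model ID; browse the VPTQ-community organization on the
# Hugging Face Hub for the available low-bit checkpoints.
model_id = "VPTQ-community/Meta-Llama-3.1-70B-Instruct-v8-k65536-0-woft"

tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
model = vptq.AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain vector quantization briefly.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```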

    Remarkably, 2-bit quantization with VPTQ almost achieves performance comparable to the original 16-bit model on tasks such as MMLU. Moreover, it makes it possible to run Llama 3.1 405B on a single GPU, while using less memory than a 70B model!
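
    A quick back-of-the-envelope check of that claim, counting weight storage only (activations, KV cache, and the small codebook overhead are ignored):

```python
def weight_gib(params_billion: float, bits: int) -> float:
    """Approximate weight storage in GiB: parameters x bits / 8 bytes."""
    return params_billion * 1e9 * bits / 8 / 2**30

print(f"Llama 3.1 405B @  2-bit: {weight_gib(405, 2):6.1f} GiB")  # ~94 GiB
print(f"Llama 3.x 70B @ 16-bit: {weight_gib(70, 16):6.1f} GiB")   # ~130 GiB
print(f"Llama 3.x 70B @  2-bit: {weight_gib(70, 2):6.1f} GiB")    # ~16 GiB
```

    So the 2-bit 405B weights (~94 GiB) are indeed smaller than the 16-bit 70B weights (~130 GiB), and a 2-bit 70B model (~16 GiB) fits comfortably on the 24 GB GPU mentioned in the subtitle.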



    Source link
