Recent advances in low-bit quantization for LLMs, such as AQLM and AutoRound, now show acceptable levels of degradation on downstream tasks, especially for large models. That said, 2-bit quantization still introduces noticeable accuracy loss in most cases.
One promising algorithm for low-bit quantization is VPTQ (MIT license), proposed by Microsoft. It was released in October 2024 and has since shown excellent performance and efficiency in quantizing large models.
In this article, we will:
- Review the VPTQ quantization algorithm.
- Demonstrate how to use VPTQ models, many of which are already available. For instance, we can easily find low-bit variants of Llama 3.3 70B, Llama 3.1 405B, and Qwen2.5 72B.
- Evaluate these models and discuss the results to understand when VPTQ models can be a good choice for LLMs in production.
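Before diving in, it helps to have an intuition for the core idea: VPTQ is based on vector quantization, where weights are grouped into short vectors and each vector is replaced by the index of its nearest centroid in a shared codebook. The sketch below is a toy simplification of that idea, not Microsoft's implementation; the function name, shapes, and codebook construction are all illustrative.

```python
import numpy as np

def vector_quantize(weights: np.ndarray, codebook: np.ndarray):
    """Map each length-v weight vector to its nearest codebook centroid."""
    v = codebook.shape[1]
    vectors = weights.reshape(-1, v)
    # Squared Euclidean distance from every weight vector to every centroid.
    dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)            # one codebook index per vector
    dequantized = codebook[indices].reshape(weights.shape)
    return indices, dequantized

rng = np.random.default_rng(0)
k, v = 256, 4                                  # 256 centroids, vectors of length 4
weights = rng.standard_normal((8, 16)).astype(np.float32)
codebook = rng.standard_normal((k, v)).astype(np.float32)

indices, dequantized = vector_quantize(weights, codebook)

# Storing one log2(k)-bit index per v weights gives the effective bit width.
bits_per_weight = np.log2(k) / v
print(bits_per_weight)                         # 2.0 bits per weight
```

This is why a larger codebook spread over longer vectors can reach very low effective bit widths: with k = 65,536 centroids and vectors of length 8, each weight costs log2(65536) / 8 = 2 bits.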
Remarkably, 2-bit quantization with VPTQ almost matches the performance of the original 16-bit model on tasks such as MMLU. Moreover, it makes it possible to run Llama 3.1 405B on a single GPU, while using less memory than a 70B model!
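A quick back-of-the-envelope calculation shows why the memory claim holds. Note that this counts the weights alone; codebooks, activations, and the KV cache add overhead on top, so treat these numbers as rough lower bounds.

```python
# Weight-memory estimate: at roughly 2 bits per weight, Llama 3.1 405B's
# weights occupy less memory than a 70B model stored in 16-bit.

def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Memory for the weights alone, in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

mem_405b_2bit = weight_memory_gb(405e9, 2)    # ~101 GB
mem_70b_16bit = weight_memory_gb(70e9, 16)    # 140 GB

print(f"405B @ 2-bit : {mem_405b_2bit:.0f} GB")
print(f"70B @ 16-bit : {mem_70b_16bit:.0f} GB")
```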