bitsandbytes-foundation/bitsandbytes

Accessible large language models via k-bit quantization for PyTorch.

Score: 77 / 100 · Verified

Working with large language models (LLMs) can be challenging because their memory demands often exceed what standard hardware provides. bitsandbytes reduces that footprint by quantizing model weights and optimizer states to 8-bit and 4-bit representations, so the same model runs and fine-tunes in a fraction of the memory. This lets researchers and AI practitioners work with advanced LLMs on more accessible computing resources.
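To make the idea concrete, here is a minimal sketch of absmax int8 quantization, one of the basic building blocks behind 8-bit weight storage (a simplified illustration of the general technique, not bitsandbytes' actual implementation): each tensor is scaled so its largest absolute value maps to 127, rounded to integer codes, and dequantized on use.

```python
def absmax_quantize(values):
    """Quantize floats to int8 codes via absmax scaling: the largest
    magnitude maps to 127, everything else scales proportionally."""
    scale = 127.0 / max(abs(v) for v in values)
    quantized = [round(v * scale) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from the integer codes."""
    return [q / scale for q in quantized]

weights = [0.5, -1.0, 0.25, 0.1]
codes, scale = absmax_quantize(weights)
restored = dequantize(codes, scale)
# Each code fits in 1 byte instead of 4 (float32), at the cost of a
# small rounding error per element.
print(codes)     # integer codes in [-127, 127]
print(restored)  # close to the original weights
```

Real k-bit schemes add refinements on top of this (per-block scaling, outlier handling, non-uniform 4-bit code books), but the memory saving comes from the same trade: fewer bits per value in exchange for bounded rounding error.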

8,033 stars. Used by 74 other packages. Actively maintained with 20 commits in the last 30 days. Available on PyPI.

Use this if you need to run or fine-tune large language models but are limited by your computer's memory capacity, especially on GPUs.

Not ideal if you are working with smaller models that don't face memory constraints or if you prefer to work exclusively with 32-bit precision for specific research reasons.

Tags: large-language-models, deep-learning, model-optimization, AI-research, natural-language-processing
Maintenance: 17 / 25
Adoption: 15 / 25
Maturity: 25 / 25
Community: 20 / 25


Stars: 8,033
Forks: 831
Language: Python
License: MIT
Last pushed: Mar 10, 2026
Commits (30d): 20
Dependencies: 3
Reverse dependents: 74

Get this data via the API:

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/bitsandbytes-foundation/bitsandbytes"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.