bigscience-workshop/petals

🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading

Quality score: 54 / 100 (Established)

This project helps you run and customize powerful large language models (LLMs) like Llama 3.1 or Mixtral on your personal computer, even without expensive GPU hardware. You provide a prompt or data for fine-tuning, and it generates text or a specialized model. It's aimed at researchers, developers, and hobbyists who want to experiment with advanced AI models without access to a supercomputer.

9,997 stars. Used by 1 other package. No commits in the last 6 months. Available on PyPI.

Use this if you want to access and fine-tune massive language models for text generation or specific tasks using your existing computer, leveraging a distributed network of GPUs.

Not ideal if your data is highly sensitive and cannot be processed by a public network, or if you require guaranteed low-latency inference for real-time production systems without contributing GPU resources.

large-language-models natural-language-processing ai-experimentation distributed-computing model-fine-tuning
Stale (6 months)
Maintenance: 0 / 25
Adoption: 11 / 25
Maturity: 25 / 25
Community: 18 / 25


Stars: 9,997
Forks: 595
Language: Python
License: MIT
Last pushed: Sep 07, 2024
Commits (30d): 0
Dependencies: 18
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/bigscience-workshop/petals"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
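The same endpoint can be called from Python instead of curl. The sketch below is a minimal example, assuming the URL pattern shown above (`/api/v1/quality/{ecosystem}/{owner}/{repo}`) and that the endpoint returns a JSON body; the function names and the response shape are illustrative, not part of the documented API.

```python
# Hedged sketch: fetch the quality report as JSON using only the stdlib.
# The URL pattern matches the curl example above; the JSON response
# structure is an assumption and should be checked against a real response.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository."""
    return f"{API_BASE}/{ecosystem}/{owner}/{repo}"


def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """GET the report and decode the JSON body (field names are assumed)."""
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    report = fetch_quality("transformers", "bigscience-workshop", "petals")
    print(json.dumps(report, indent=2))
```

No key is required within the free daily quota; a key, if obtained, would presumably be sent as a header or query parameter per the service's docs.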