huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
This project helps machine learning practitioners adapt large AI models, such as those used for text generation or image creation, to new tasks without needing immense computing power. You provide a pre-trained model and a small task-specific dataset, and it produces a compact 'adapter' that tailors the model's behavior while leaving the original weights untouched. This makes it ideal for anyone working with large language models or diffusion models who needs to customize them for applications like specialized chatbots or custom image styles.
20,777 stars. Used by 83 other packages. Actively maintained with 31 commits in the last 30 days. Available on PyPI.
Use this if you need to customize a large pre-trained AI model for a specific task or dataset but are constrained by computing resources or storage.
Not ideal if you are developing a model from scratch or performing full-scale foundational model training, as this focuses on efficient adaptation rather than initial model development.
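To make the 'adapter' idea above concrete, here is a minimal, dependency-free sketch of the low-rank update used by PEFT methods such as LoRA: the frozen base weight W is never modified, and only two small matrices A and B are trained. The function names and matrix sizes are illustrative only, not PEFT's actual API.

```python
def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

def adapted_forward(W, A, B, x):
    """y = W x + B (A x): frozen base output plus a low-rank correction."""
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))
    return [b + d for b, d in zip(base, delta)]

# Frozen 2x2 base weight, rank-1 adapter (A: 1x2 down-projection,
# B: 2x1 up-projection) — only A and B would be trained.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.5, 0.5]]
B = [[1.0], [-1.0]]
print(adapted_forward(W, A, B, [2.0, 4.0]))  # → [5.0, 1.0]
```

Because only A and B are stored, the saved adapter is a small fraction of the full model's size, which is why a separate compact artifact can be shipped per task.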
Stars: 20,777
Forks: 2,211
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 12, 2026
Commits (30d): 31
Dependencies: 10
Reverse dependents: 83
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/huggingface/peft"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
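The same request can be made from Python. The URL pattern below is inferred from the single example above; the path segments (`quality`, ecosystem, owner, repo) are an assumption, not documented API structure.

```python
# Build the quality-API URL for a given repository. Pattern inferred
# from the one example shown on this page; treat it as a sketch.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    # e.g. ecosystem="transformers", owner="huggingface", repo="peft"
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

print(quality_url("transformers", "huggingface", "peft"))
```

The printed URL matches the curl command above and could then be fetched with any HTTP client, subject to the 100 requests/day anonymous limit.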
Related models
unslothai/unsloth
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama,...
modelscope/ms-swift
Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5,...
oumi-ai/oumi
Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM!
linkedin/Liger-Kernel
Efficient Triton Kernels for LLM Training
hiyouga/LlamaFactory
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)