AGI-Edgerunners/LLM-Adapters

Code for our EMNLP 2023 paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"

Quality score: 45 / 100 (Emerging)

This project helps machine learning engineers efficiently customize large language models (LLMs) for specific tasks without needing massive computational resources. It takes an existing LLM (like LLaMA, OPT, or BLOOM) and a task-specific dataset, then outputs a specialized model ready for tasks like arithmetic reasoning or commonsense reasoning. The ideal user is an ML engineer or researcher working with LLMs who needs to fine-tune them for niche applications.
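To make the workflow concrete, here is a minimal sketch of adapter-style parameter-efficient fine-tuning using the Hugging Face peft library. This is illustrative only, not the repo's own training script: the base model name and hyperparameters below are placeholder assumptions, not the project's defaults.

# Illustrative sketch of adapter-based fine-tuning with Hugging Face PEFT.
# Not this repo's script; model name and hyperparameters are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "huggyllama/llama-7b"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters to the attention projections; only these small
# matrices are trained while the base model's weights stay frozen.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights

Because only the small adapter matrices are trained, the result can be saved and shipped separately from the base model, which is what keeps the compute and storage cost low.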

1,229 stars. No commits in the last 6 months.

Use this if you need to adapt a large language model to perform better on a specific task or dataset, but want to avoid the high computational cost and time of full fine-tuning.

Not ideal if you need to train a large language model completely from scratch, or if you prefer full fine-tuning over parameter-efficient methods.

Tags: Large Language Models · Model Customization · Natural Language Processing · Machine Learning Engineering · AI Research
Badges: Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 19 / 25


Stars: 1,229
Forks: 120
Language: Python
License: Apache-2.0
Last pushed: Mar 10, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AGI-Edgerunners/LLM-Adapters"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
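The same data can be fetched programmatically. A minimal Python sketch, assuming the endpoint returns JSON (response field names are not shown because they are not confirmed here):

# Minimal sketch: fetch the quality data in Python. Assumes a JSON response.
import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/AGI-Edgerunners/LLM-Adapters")
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface HTTP errors instead of parsing bad output
print(resp.json())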