microsoft/LoRA

Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"

Score: 57 / 100 (Established)

This project provides a way for machine learning engineers and researchers to adapt large language models (LLMs) to specific tasks more efficiently. Instead of retraining the entire model, it lets you fine-tune only a small fraction of its parameters: you take a pre-trained LLM and quickly customize it for a new job, yielding a task-specific model that performs comparably to a fully fine-tuned one at significantly lower computational and storage cost.
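The core idea can be sketched in plain Python (this is an illustrative sketch of the math, not loralib's API): rather than updating a full d x k weight matrix, LoRA learns two small factors B (d x r) and A (r x k) and adds their product to the frozen weights, so only r * (d + k) parameters are trained.

```python
# Illustrative sketch of the LoRA parameter savings; not the loralib API.
# A frozen d x k weight W is adapted as W + B @ A, where B is d x r and
# A is r x k, with rank r chosen much smaller than min(d, k).

d, k, r = 4096, 4096, 8  # hypothetical hidden sizes and a small rank

full_params = d * k        # parameters updated by full fine-tuning
lora_params = r * (d + k)  # parameters updated by LoRA

print(f"full fine-tune: {full_params:,} params")   # 16,777,216
print(f"LoRA (r={r}):   {lora_params:,} params")   # 65,536
print(f"reduction:      {full_params // lora_params}x")  # 256x
```

Because only the small A and B factors are stored per task, a single pre-trained model can serve many tasks with one compact adapter checkpoint each.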

13,320 stars. Used by 4 other packages. No commits in the last 6 months. Available on PyPI.

Use this if you need to adapt large, pre-trained language models for new, specific natural language processing tasks while drastically reducing training time, computational resources, and storage footprint.

Not ideal if you are developing models from scratch or if your tasks do not involve adapting existing large language models.

natural-language-processing large-language-models model-adaptation machine-learning-engineering AI-research
Stale: no commits in 6 months
Maintenance: 0 / 25
Adoption: 14 / 25
Maturity: 25 / 25
Community: 18 / 25
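The four component scores above add up to the overall 57/100 rating, which a quick check confirms:

```python
# The overall rating is the sum of the four component scores.
components = {"Maintenance": 0, "Adoption": 14, "Maturity": 25, "Community": 18}
total = sum(components.values())
print(total)  # -> 57
```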


Stars: 13,320
Forks: 888
Language: Python
License: MIT
Last pushed: Dec 17, 2024
Commits (30d): 0
Reverse dependents: 4

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/microsoft/LoRA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.