rasbt/dora-from-scratch

LoRA and DoRA from Scratch Implementations

Quality score: 39 / 100 (Emerging)

This project offers from-scratch implementations of LoRA (Low-Rank Adaptation) and DoRA (Weight-Decomposed Low-Rank Adaptation). It helps machine learning practitioners fine-tune large language models more efficiently, reducing the computational resources and time needed. If you work with large pre-trained models and need to adapt them to specific tasks or datasets, this resource demonstrates the methods for doing so effectively.
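To make the idea concrete, here is a minimal sketch of a LoRA layer. This is a hypothetical NumPy illustration, not code from the repository (which uses PyTorch notebooks): a frozen weight W is augmented with a low-rank update B @ A scaled by alpha / rank, and only A and B would be trained.

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA sketch (hypothetical, NumPy-only illustration).

    A frozen pretrained weight W is combined with a trainable
    low-rank update B @ A, scaled by alpha / rank.
    """

    def __init__(self, in_dim, out_dim, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pretrained weight (stands in for the original model weight).
        self.W = rng.standard_normal((out_dim, in_dim))
        # Standard LoRA init: A small random, B zero, so the
        # update starts as a no-op and training begins at W.
        self.A = rng.standard_normal((rank, in_dim)) * 0.01
        self.B = np.zeros((out_dim, rank))
        self.scale = alpha / rank

    def __call__(self, x):
        # y = x W^T + scale * x A^T B^T  (the low-rank correction)
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T
```

Because B starts at zero, the layer initially reproduces the frozen model exactly; fine-tuning then only updates the rank * (in_dim + out_dim) parameters of A and B instead of the full in_dim * out_dim weight matrix.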

217 stars. No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher looking to apply parameter-efficient fine-tuning techniques like LoRA or DoRA to large models without extensive computational power.

Not ideal if you are looking for a high-level API or a solution for training models from scratch, as this focuses on the underlying mechanics of specific fine-tuning methods.
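For the DoRA side of the repo, the core mechanic is decomposing the weight into a magnitude and a direction. The sketch below is an assumed NumPy illustration of that decomposition (the function name and shapes are mine, not the repo's API): the LoRA update is applied to the direction, each column is normalized, and a learnable per-column magnitude m rescales the result.

```python
import numpy as np

def dora_weight(W, A, B, m):
    """Hypothetical sketch of DoRA's merged weight.

    W: frozen pretrained weight, shape (out_dim, in_dim)
    A: LoRA factor, shape (rank, in_dim)
    B: LoRA factor, shape (out_dim, rank)
    m: learnable per-column magnitude, shape (in_dim,)

    Returns m * (W + B @ A) / ||W + B @ A||, with the norm
    taken column-wise, so magnitude and direction are trained
    separately.
    """
    V = W + B @ A                      # directional component with LoRA update
    norm = np.linalg.norm(V, axis=0)   # per-column L2 norms
    return m * (V / norm)              # rescale each unit column to magnitude m
```

With A and B at zero and m initialized to the column norms of W, the merged weight equals W, so (as with LoRA) fine-tuning starts from the unmodified pretrained model.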

Machine-Learning-Fine-Tuning Large-Language-Models AI-Model-Adaptation Deep-Learning-Optimization Parameter-Efficient-Training
Flags: Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 13 / 25


Stars: 217
Forks: 18
Language: Jupyter Notebook
License: MIT
Category: llm-fine-tuning
Last pushed: Mar 05, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/rasbt/dora-from-scratch"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.