rasbt/dora-from-scratch
LoRA and DoRA from Scratch Implementations
This project offers practical from-scratch implementations of LoRA (Low-Rank Adaptation) and DoRA (Weight-Decomposed Low-Rank Adaptation). It helps machine learning practitioners fine-tune large language models efficiently, reducing the compute and time needed. If you work with large pre-trained models and need to adapt them to specific tasks or datasets, this resource walks through the methods step by step.
217 stars. No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher looking to apply parameter-efficient fine-tuning techniques like LoRA or DoRA to large models without extensive computational power.
Not ideal if you are looking for a high-level API or a solution for training models from scratch, as this focuses on the underlying mechanics of specific fine-tuning methods.
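To make the two techniques concrete, here is a minimal numpy sketch of the core idea, following the standard LoRA and DoRA formulations rather than this repository's actual notebook code; all variable names (`W`, `A`, `B`, `m`, `alpha`, `r`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 6, 4, 2, 4

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))               # zero-init, so the update starts at zero

# LoRA: effective weight is W plus a scaled low-rank update B @ A
W_lora = W + (alpha / r) * (B @ A)

# DoRA: decompose W into a per-column magnitude m and a direction,
# apply the LoRA update to the directional part, then rescale by m
m = np.linalg.norm(W, axis=0, keepdims=True)       # learned magnitude (init from W)
V = W + (alpha / r) * (B @ A)                      # LoRA-updated directional part
W_dora = m * V / np.linalg.norm(V, axis=0, keepdims=True)

# With B initialized to zero, both methods start exactly at the pretrained weight
print(np.allclose(W_lora, W), np.allclose(W_dora, W))
```

Only `A`, `B` (and, for DoRA, `m`) would be trained; `W` stays frozen, which is what makes both methods parameter-efficient.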
Stars
217
Forks
18
Language
Jupyter Notebook
License
MIT
Category
Last pushed
Mar 05, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/rasbt/dora-from-scratch"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
OptimalScale/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
adithya-s-k/AI-Engineering.academy
Mastering Applied AI, One Concept at a Time
jax-ml/jax-llm-examples
Minimal yet performant LLM examples in pure JAX
young-geng/scalax
A simple library for scaling up JAX programs
riyanshibohra/TuneKit
Upload your data → Get a fine-tuned SLM. Free.