leehanchung/lora-instruct

Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA

Score: 42 / 100 (Emerging)

This project helps machine learning engineers customize large language models (LLMs) like RedPajama to perform specific instruction-following tasks. It takes an existing base LLM and a dataset of desired instruction-response pairs as input. The output is a specialized LLM capable of generating more accurate and relevant responses for your particular use case, even on consumer-grade hardware.
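A minimal sketch of the underlying workflow with Hugging Face PEFT (the base model name, target modules, and hyperparameters below are illustrative assumptions, not the repo's actual configuration):

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative base model; the repo targets Falcon, LLaMA, MPT, and RedPajama.
base_model = "togethercomputer/RedPajama-INCITE-Base-3B-v1"
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# LoRA adds small trainable rank-decomposition matrices to selected layers,
# so only a tiny fraction of the weights is updated during fine-tuning.
lora_config = LoraConfig(
    r=8,                                 # rank of the update matrices
    lora_alpha=16,                       # scaling factor for the updates
    target_modules=["query_key_value"],  # module names vary by architecture
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()       # typically well under 1% of weights

Training then proceeds with instruction-response pairs as usual; only the LoRA matrices receive gradient updates, which is what keeps memory use within reach of consumer GPUs.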

104 stars. No commits in the last 6 months.

Use this if you need to adapt a pre-trained large language model to a specific set of instructions or a particular domain without incurring the high computational costs of full fine-tuning.

Not ideal if you need to train a large language model from scratch, or to fine-tune architectures the project does not currently support, such as encoder-decoder models.

large-language-models model-customization natural-language-processing machine-learning-engineering
Status: stale for 6 months; no published package; no known dependents.
Maintenance 2 / 25
Adoption 9 / 25
Maturity 16 / 25
Community 15 / 25

Stars: 104
Forks: 15
Language: Python
License: Apache-2.0
Last pushed: May 20, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/leehanchung/lora-instruct"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
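The same endpoint works from Python; a minimal sketch using requests (the response is assumed to be JSON, so inspect it for the actual schema):

import requests

# Same endpoint as the curl example above; the free tier needs no key.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/leehanchung/lora-instruct"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # assumed JSON payload; field names not documented here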