bupticybee/FastLoRAChat

Instruct-tune LLaMA on consumer hardware with shareGPT data

Score: 30 / 100 (Emerging)

This project helps developers and researchers fine-tune large language models (LLMs) like LLaMA for multi-round, multi-language chat on more affordable consumer graphics cards. It takes a base LLaMA model and conversational datasets (like ShareGPT) as input, then outputs a specialized chat model ready for deployment. This is ideal for those who want to customize LLMs without needing expensive, high-end hardware.
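
The fine-tuning here follows the LoRA approach: rather than updating all model weights, small low-rank adapter matrices are trained on top of a frozen base model, which is what makes consumer GPUs sufficient. Below is a minimal sketch of that technique using the Hugging Face peft library; the base checkpoint name, target modules, and hyperparameters are illustrative assumptions, not FastLoRAChat's actual configuration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Hypothetical base checkpoint; FastLoRAChat's actual setup may differ.
BASE = "decapoda-research/llama-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE,
    load_in_8bit=True,   # 8-bit weights keep memory within consumer-GPU limits
    device_map="auto",
)

# LoRA trains small low-rank update matrices on top of frozen base weights.
lora = LoraConfig(
    r=8,                 # rank of the adapter matrices
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # a common choice for LLaMA attention
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all weights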

125 stars. No commits in the last 6 months.

Use this if you need to adapt a LLaMA model for specific conversational tasks or domains using readily available, lower-cost GPUs.

Not ideal if you're looking for an out-of-the-box solution that doesn't require any technical setup or custom training.

Tags: AI-model-customization, conversational-AI-development, language-model-training, GPU-optimization, AI-research
Badges: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 4 / 25

The four 25-point dimensions sum to the overall score: 0 + 10 + 16 + 4 = 30 out of 100.

Stars: 125
Forks: 2
Language: Jupyter Notebook
License: Apache-2.0
Category: llm-fine-tuning
Last pushed: Apr 20, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/bupticybee/FastLoRAChat"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
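
The same endpoint can be queried from Python. A minimal sketch, assuming only that the endpoint returns JSON (the response schema is not documented here, so the result is simply pretty-printed):

import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/bupticybee/FastLoRAChat"

# Fetch the quality record and pretty-print whatever JSON comes back.
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2))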