zetavg/LLaMA-LoRA-Tuner

UI tool for fine-tuning and testing your own LoRA models based on LLaMA, GPT-J, and more. One-click run on Google Colab. Plus a Gradio ChatGPT-like chat UI to demonstrate your language models.

Quality score: 42 / 100 (Emerging)

This tool helps AI practitioners and researchers easily fine-tune and evaluate large language models like LLaMA using low-rank adaptation (LoRA). You can provide your own text datasets in common formats and get back specialized models. It's designed for individuals who want to quickly adapt existing language models for specific tasks or datasets without deep technical setup.

476 stars. No commits in the last 6 months.

Use this if you need to customize LLaMA, GPT-J, or similar language models with your own data for a specific use case and want a user-friendly interface to do so.

Not ideal if you require full control over the underlying model architecture or need to fine-tune models other than those supported (LLaMA, GPT-J, Dolly-V2, Pythia).

Tags: AI model customization, natural language processing, machine learning research, language model training, dataset adaptation
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 24 / 25


Stars: 476
Forks: 98
Language: Python
License: none
Last pushed: May 29, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zetavg/LLaMA-LoRA-Tuner"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
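The same record can be fetched programmatically. Below is a minimal Python sketch using only the standard library, built around the endpoint shown in the curl example above; the structure of the JSON response is not documented here, so the code simply returns the parsed payload rather than assuming specific field names.

```python
import json
import urllib.request

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(registry: str, owner: str, repo: str) -> str:
    """Build the quality-record URL for a given registry/owner/repo path."""
    return f"{API_BASE}/{registry}/{owner}/{repo}"


def fetch_quality(registry: str, owner: str, repo: str) -> dict:
    """GET the quality record as parsed JSON.

    No API key is required for up to 100 requests/day; a free key
    raises the limit to 1,000/day.
    """
    with urllib.request.urlopen(quality_url(registry, owner, repo)) as resp:
        return json.load(resp)
```

For this repository the call would be `fetch_quality("transformers", "zetavg", "LLaMA-LoRA-Tuner")`, equivalent to the curl command above.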