jwest33/lora_craft

An open-source web application for fine-tuning large language models using Low-Rank Adaptation (LoRA) and Group Relative Policy Optimization (GRPO). Built to make fine-tuning accessible!
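As background for the two techniques the project is named for: GRPO scores each sampled completion relative to the other completions in its group, normalizing rewards to group-relative advantages. A minimal sketch of that normalization step (my own illustrative code, not taken from this repository):

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """Advantage of each completion = (reward - group mean) / group std."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# Four sampled completions for one prompt, scored by a reward function
adv = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
print(np.round(adv, 3))  # best completion gets a positive advantage,
                         # worst a negative one, average ones ~zero
```

Because advantages are centered within each group, no separate value network is needed, which is the main practical appeal of GRPO over PPO-style training.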

Quality score: 21 / 100 (Experimental)

This tool helps non-developers train large language models for specific tasks like math reasoning or code generation. You provide a base language model and a dataset (either pre-configured or your own), and it produces a fine-tuned model ready for specialized use. It's designed for anyone who wants to customize an LLM's behavior without needing deep machine learning expertise.
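The LoRA part of the workflow is what keeps training tractable on a single GPU: instead of updating a full weight matrix, it learns a low-rank correction. A minimal NumPy sketch of the idea (illustrative only; the actual project builds on full training libraries, and the dimensions here are arbitrary):

```python
import numpy as np

# LoRA replaces a full weight update dW (d_out x d_in) with a low-rank
# product B @ A, where A is (r x d_in) and B is (d_out x r), r << d_in.
rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4

W = rng.standard_normal((d_out, d_in))      # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero init

x = rng.standard_normal(d_in)
y = W @ x + B @ (A @ x)  # adapted forward pass: base output + correction

# With B initialized to zero, the adapted model starts identical to the base.
assert np.allclose(y, W @ x)

# Trainable parameters: r*(d_in + d_out) instead of d_in*d_out.
print(r * (d_in + d_out), "vs", d_in * d_out)
```

Only A and B are trained, so the number of trainable parameters drops from d_in x d_out to r x (d_in + d_out), which is why fine-tuning large models this way fits on consumer hardware.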

No commits in the last 6 months.

Use this if you need to quickly adapt a language model to perform a particular task or follow specific instructions using a simple web interface.

Not ideal if you lack access to a powerful GPU, or if you need model architecture modifications that go beyond fine-tuning.

AI model customization · language model training · natural language processing · data science workflow · AI application development

Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 4 / 25
Maturity 15 / 25
Community 0 / 25


Stars: 8
Forks:
Language: Python
License: MIT
Category: llm-fine-tuning
Last pushed: Oct 14, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/jwest33/lora_craft"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.