jwest33/lora_craft
An open-source web application for fine-tuning large language models using Low-Rank Adaptation (LoRA) and Group Relative Policy Optimization (GRPO). Built to make fine-tuning accessible!
This tool helps non-developers train large language models for specific tasks like math reasoning or code generation. You provide a base language model and a dataset (either pre-configured or your own), and it produces a fine-tuned model ready for specialized use. It's designed for anyone who wants to customize an LLM's behavior without needing deep machine learning expertise.
No commits in the last 6 months.
Use this if you need to quickly adapt a language model to perform a particular task or follow specific instructions using a simple web interface.
Not ideal if you lack access to a powerful GPU or require extensive, highly customized model architecture modifications beyond fine-tuning.
Stars: 8
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Oct 14, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/jwest33/lora_craft"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
daekeun-ml/genai-ko-LLM
This hands-on lab walks you through a step-by-step approach to efficiently serving and...
keanteng/sesame-csm-elise
Fine-Tuning Sesame CSM With Elise. Enjoy the voice ( ̄︶ ̄)↗
ksm26/Quantization-Fundamentals-with-Hugging-Face
Learn linear quantization techniques using the Quanto library and downcasting methods with the...
just4give/llm-sagemaker-fargate-api
This repository contains two major projects that work together to deploy and serve Large...
simran-padam/FineTuningLlama
FineTuning Llama to create a versatile chatbot