# LLM Finetuning Frameworks
Comprehensive platforms and toolkits for fine-tuning pre-trained large language models on custom datasets, including training orchestration, dataset curation, and model optimization. Does NOT include inference frameworks, model deployment tools, or general LLM training from scratch.
There are 67 LLM fine-tuning frameworks tracked; 1 scores above 50 (the established tier). The highest-rated is limix-ldm-ai/LimiX at 54/100 with 3,340 stars.
Get all 67 projects as JSON (the example below requests the first 20; raise `limit` to fetch all):

```shell
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=ml-frameworks&subcategory=llm-finetuning-frameworks&limit=20"
```
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
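For scripted access, the same query can be issued from Python's standard library. This is a minimal sketch built only from the endpoint and parameters shown in the curl example; the shape of the JSON response body is an assumption, so inspect it before relying on specific fields.

```python
import json
import urllib.parse
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/datasets/quality"

def build_query_url(domain: str, subcategory: str, limit: int = 20) -> str:
    """Assemble the quality-dataset query URL used by the curl example."""
    params = urllib.parse.urlencode(
        {"domain": domain, "subcategory": subcategory, "limit": limit}
    )
    return f"{API_BASE}?{params}"

def fetch_frameworks(limit: int = 20):
    """Fetch tracked frameworks as parsed JSON (no key needed, 100 req/day).

    The response schema is not documented here, so the return value is
    whatever the endpoint serves, parsed as JSON.
    """
    url = build_query_url("ml-frameworks", "llm-finetuning-frameworks", limit)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)
```

Keyless requests are rate-limited to 100/day, so cache the response locally rather than re-fetching per run.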
| # | Framework | Description | Score (0-100) | Tier |
|---|---|---|---|---|
| 1 | limix-ldm-ai/LimiX | LimiX: Unleashing Structured-Data Modeling Capability for Generalist... | 54 | Established |
| 2 | tatsu-lab/stanford_alpaca | Code and documentation to train Stanford's Alpaca models, and generate the data. | | Emerging |
| 3 | google-research/plur | PLUR (Programming-Language Understanding and Repair) is a collection of... | | Emerging |
| 4 | YalaLab/pillar-finetune | Finetuning framework for Pillar medical imaging models. | | Emerging |
| 5 | thuml/LogME | Code release for "LogME: Practical Assessment of Pre-trained Models for... | | Emerging |
| 6 | joisino/reeval-wmd | Code for "Re-evaluating Word Mover's Distance" (ICML 2022) | | Emerging |
| 7 | Cloud-CV/diverse-beam-search | Decoding Diverse Solutions from Neural Sequence Models | | Emerging |
| 8 | santos-sanz/mlx-lora-finetune-template | Template for fine-tuning LLMs with LoRA using Apple MLX on Apple Silicon Macs | | Emerging |
| 9 | P1ayer-1/Llama-LibTorch | Llama causal LM fully recreated in LibTorch. Designed to be used in Unreal Engine 5 | | Emerging |
| 10 | YalaLab/pillar-pretrain | This repository contains the pretraining code for the Pillar-0 model. | | Emerging |
| 11 | gruai/koifish | A C++ framework for efficient training and fine-tuning of LLMs | | Emerging |
| 12 | yigitkonur/cli-finetune-dataset | Weighted, category-balanced dataset builder for LLM fine-tuning | | Emerging |
| 13 | tk-rusch/LEM | Official code for Long Expressive Memory (ICLR 2022, Spotlight) | | Emerging |
| 14 | EngineeringSoftware/CoditT5 | CoditT5: Pretraining for Source Code and Natural Language Editing | | Emerging |
| 15 | furkantanyol/aitelier | An opinionated workflow tool for managing the full lifecycle of fine-tuning datasets | | Emerging |
| 16 | ashworks1706/llm-from-scratch | A theoretical and practical deep dive into Large Language Models and their... | | Emerging |
| 17 | jordandeklerk/Starcoder2-Finetune-Code-Completion | Finetuning Starcoder2-3B for code completion on a single A100 GPU | | Experimental |
| 18 | OSU-MLB/Fine-Tuning-Is-Fine-If-Calibrated | Official implementation of "Fine-Tuning is Fine, if Calibrated", NeurIPS 2024 | | Experimental |
| 19 | BioDT/bfm-finetune | Finetune routines for the Biodiversity Foundation Model | | Experimental |
| 20 | machelreid/lewis | Official code for LEWIS, from: "LEWIS: Levenshtein Editing for Unsupervised... | | Experimental |
| 21 | machinelearningnuremberg/QuickTune | [ICLR 2024] Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How | | Experimental |
| 22 | KazKozDev/synth-dataset-kit | CLI tool for generating high-quality synthetic datasets for LLM fine-tuning. | | Experimental |
| 23 | mcaimi/flan-t5-finetune-ita | This repository has been moved to... | | Experimental |
| 24 | aimonlabs/hallucination-detection-model | HalluciNot: Hallucination Detection Through Context and Common Knowledge Verification | | Experimental |
| 25 | eshanized/SLMGen | Fine-tune small language models the right way: dataset intelligence,... | | Experimental |
| 26 | Aliyan-12/deepseek-finetuning---llama-rag---whisper-reasoning---colab | PEFT (Parameter-Efficient Fine-Tuning) workflow for Unsloth/DeepSeek-R1 on... | | Experimental |
| 27 | loevlie/neuropt | LLM-guided ML optimization. Point it at a training script, it reads the... | | Experimental |
| 28 | mamoun78444/llm-json | Parse JSON quickly using a fast, recursive-descent parser designed for... | | Experimental |
| 29 | SunPCSolutions/FinetuneOrch | FineTuneOrch is a web-based orchestration dashboard that simplifies... | | Experimental |
| 30 | Lukin-GCST/mgs-llm-stability-sensor | Geometric stability sensor for detecting hallucinations in LLM outputs | | Experimental |
| 31 | rickiepark/fine-tuning-llm | | | Experimental |
| 32 | rpatrik96/hallmark | HALLMARK: Citation hallucination detection benchmark for ML papers, 2,525... | | Experimental |
| 33 | frafalcone/llm-design-train | A PyTorch implementation of a LLaMA-inspired LLM, featuring GQA, RoPE, and SwiGLU. | | Experimental |
| 34 | Nagavenkatasai7/llm-forge | Config-driven, YAML-first open-source LLM training platform. Fine-tune... | | Experimental |
| 35 | Radket27/Simple-LLM | Simple LLM | | Experimental |
| 36 | teddante/Ensemble | A modern web application that queries multiple Large Language Models... | | Experimental |
| 37 | Yog-Sotho/Brainbrew | A simple GUI tool that generates LLM training datasets through model... | | Experimental |
| 38 | MaheshJakkala/llm-c-transformer | Transformer LLM from scratch in C: custom tensor lib, INT8 post-training... | | Experimental |
| 39 | nshkrdotcom/vllm | vLLM: high-throughput, memory-efficient LLM inference engine with... | | Experimental |
| 40 | hzwwww/LLM-From-Zero-to-Hero | A hands-on project for systematically learning LLMs from scratch: a series of carefully designed Jupyter Notebooks takes you from basic theory through core algorithms to the key techniques behind LLMs | | Experimental |
| 41 | atasoglu/awesome-turkish-vlm | A curated list of models, datasets and other useful resources for Turkish... | | Experimental |
| 42 | hesamsheikh/AnimAI-Trainer | Train an LLM to generate cracked Manim animations for mathematical concepts. | | Experimental |
| 43 | matjsz/shard | Shard is an open-source LLM tuning package for Python, which can turn any... | | Experimental |
| 44 | harshi1111/multi-granular-llm-analysis | Production-ready system for detecting WHERE LLM responses fail, not just IF... | | Experimental |
| 45 | alexisbriandev/mini-llm | A modular, educational, and high-performance implementation of a... | | Experimental |
| 46 | asoloveii/nano-llm | An implementation of a custom language model from scratch in PyTorch.... | | Experimental |
| 47 | originaonxi/asm-replication | Replication study: Adaptive Skill Modeling for multi-task LLM training.... | | Experimental |
| 48 | RodrigoVargasMolina/liteweight-pony-trainer-8g-safetensor | Lightweight SDXL LoRA trainer optimized for 8GB VRAM GPUs. GUI with... | | Experimental |
| 49 | NeuroRaptor/clip-hallucination-detection | Evidence-based hallucination detection framework for CLIP vision-language... | | Experimental |
| 50 | Wasisange/llm-finetuning-toolkit | A toolkit for efficient fine-tuning of large language models on custom datasets. | | Experimental |
| 51 | Joe-Naz01/SFTT_Trainer | This repository contains a comprehensive pipeline for fine-tuning Large... | | Experimental |
| 52 | Mikeore/lumi-arch-research | Public research notes on compact architecture exploration for efficient... | | Experimental |
| 53 | Prajit-Rahul/Lightweight-Multilingual-Translation-for-Edge-Devices | LoRA, distillation, quantization, and pruning for edge-friendly multilingual... | | Experimental |
| 54 | Bender1011001/dual-system-architecture | Geometric sidecar for LLMs: uncensored + structured reasoning, zero... | | Experimental |
| 55 | dakshjain-1616/gemma-3-12b-medical-sft | Fine-tunes google/gemma-3-12b-it with Unsloth SFT and LoRA (r=32, alpha=64)... | | Experimental |
| 56 | Restroulner/LLM-Fine-tuning-Toolkit | A comprehensive toolkit for fine-tuning Large Language Models (LLMs) with... | | Experimental |
| 57 | Manchery/awesome-visual-tokenizer | [WIP] 2025 up-to-date list of resources on visual tokenizers (primarily for... | | Experimental |
| 58 | umarmk/llm-fine-tuning-phi3 | Fine-tune an LLM for people-information extraction using Unsloth | | Experimental |
| 59 | kantkrishan0206-crypto/LLM-building-a-Large-Language-Model-LLM- | A comprehensive, educational project dedicated to building a Large... | | Experimental |
| 60 | vaibhavnayak30/llm_finetuning | This repository offers concise code for LLM fine-tuning to efficiently adapt... | | Experimental |
| 61 | slsandarubot/DeGAML-LLM | Enhance large language models with DeGAML-LLM, a meta-learning approach... | | Experimental |
| 62 | ascorbic/transformer-fun | Run Hugging Face transformers on Netlify | | Experimental |
| 63 | debanjan06/Asr-Hallucination-Detection | ASR Hallucination Detection & Mitigation System: multi-modal... | | Experimental |
| 64 | beviah/GENbAIs | Bio-inspired adapters that improve foundation models beyond LoRA... | | Experimental |
| 65 | nomadicsynth/linguaforge | A comprehensive script designed for training and fine-tuning machine learning models | | Experimental |
| 66 | AierLab/ModelSL | Model-SL is an innovative project that implements Split Learning with... | | Experimental |
| 67 | smebad/Fine-Tuning-Models | In this repository I will be fine-tuning different open-source models... | | Experimental |