LLM Fine-Tuning Optimization: Transformer Models
Nine LLM fine-tuning optimization projects are tracked. The highest-rated is sandseb123/local-lora-cookbook, scoring 37/100 with 13 stars.
Get all 9 projects as JSON:

```bash
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=transformers&subcategory=llm-fine-tuning-optimization&limit=20"
```
The API is open to everyone at 100 requests/day with no key; a free key raises the limit to 1,000 requests/day.
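For programmatic access, here is a minimal Python sketch of the same query. The response is assumed to be JSON; the listing above does not document how the free key is sent, so the `X-API-Key` header below is an assumption, not a confirmed part of the API.

```python
import requests

BASE_URL = "https://pt-edge.onrender.com/api/v1/datasets/quality"

def fetch_projects(domain="transformers",
                   subcategory="llm-fine-tuning-optimization",
                   limit=20, api_key=None):
    """Fetch tracked projects for a domain/subcategory as parsed JSON."""
    params = {"domain": domain, "subcategory": subcategory, "limit": limit}
    # Assumption: the free key goes in an X-API-Key header; the page above
    # does not specify the auth mechanism.
    headers = {"X-API-Key": api_key} if api_key else {}
    resp = requests.get(BASE_URL, params=params, headers=headers, timeout=10)
    resp.raise_for_status()  # anonymous callers are capped at 100 requests/day
    return resp.json()

if __name__ == "__main__":
    # The response schema is not documented here, so print it raw.
    print(fetch_projects())
```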
| # | Model | Description | Score | Tier |
|---|-------|-------------|-------|------|
| 1 | sandseb123/local-lora-cookbook | Fine-tune a local LLM on your own app's data in 15 minutes. Runs entirely... | 37 | Emerging |
| 2 | avnlp/llm-blender | LLM-Blender: Ensembling framework that maximizes LLM performance via... | | Emerging |
| 3 | RufelleEmmanuelPactol/Mixture-of-Experts-Transcript-Evaluator | A mixture-of-experts-inspired transcript evaluator using LLM fine-tuning.... | | Experimental |
| 4 | gulabpatel/LLMs | Alpaca, Bloom, DeciLM, Falcon, Vicuna, Llama2, Zephyr, Mistral (MoE), RAG,... | | Experimental |
| 5 | abhisheksingh-7/cotrend | Extending decoders with an integrated encoder, built for the Llama-3 Hackathon | | Experimental |
| 6 | CatnipCoders/Lambda-Driver | Lambda-Driver optimizes a small pre-trained model for resource-constrained... | | Experimental |
| 7 | d-f/llm-summarization | LoRA supervised fine-tuning, RLHF (PPO), and RAG with llama-3-8B on the TLDR... | | Experimental |
| 8 | DatarConsulting/Vashista-Sparse-Attention | Reproducibility notebook for Vashista Sparse Attention: constant-in-context... | | Experimental |
| 9 | gstenzel/PyTruffle | Block-level code retrieval using LLMs | | Experimental |