aamanlamba/phi3-tune-payments
Bidirectional fine-tuning of Microsoft's Phi-3-Mini model for payment transaction processing using LoRA. Includes forward (structured→NL) and reverse (NL→structured) models. Optimized for NVIDIA RTX 3060 (12GB VRAM). 500 synthetic examples, ~95% accuracy, 30-60min training time.
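The description names LoRA as the fine-tuning method. As a minimal sketch of the low-rank adaptation idea itself (pure NumPy, not this repository's code): the frozen base weight W is augmented with a trainable low-rank update (alpha/r) * B @ A, where B starts at zero so training begins exactly at the base model. All names and dimensions below are illustrative assumptions.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=8):
    """Frozen linear layer W plus a LoRA update (alpha/r) * B @ A.

    x: (batch, d_in), W: (d_out, d_in)
    A: (r, d_in), small random init
    B: (d_out, r), zero init so the adapter contributes nothing at start
    """
    delta = (alpha / r) * (B @ A)  # rank-r weight update, the only trainable part
    return x @ (W + delta).T

rng = np.random.default_rng(0)
d_in, d_out, r = 32, 16, 8
W = rng.normal(size=(d_out, d_in))          # frozen base weight
A = rng.normal(scale=0.01, size=(r, d_in))  # trainable
B = np.zeros((d_out, r))                    # trainable, zero-initialized

x = rng.normal(size=(4, d_in))
y = lora_forward(x, W, A, B)
# With B = 0 the LoRA path is inactive, so the output equals the base layer
assert np.allclose(y, x @ W.T)
```

In practice only A and B are updated (roughly r * (d_in + d_out) parameters per layer instead of d_in * d_out), which is what makes fine-tuning a model like Phi-3-Mini feasible on a 12 GB card.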
Stars: 2
Forks: —
Language: Python
License: —
Category: —
Last pushed: Oct 29, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/aamanlamba/phi3-tune-payments"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
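The same request can be issued from Python with the standard library; a small sketch using the endpoint URL from the curl example above (the error handling and return shape are assumptions, since the response schema is not documented here):

```python
import json
import urllib.request
from urllib.error import URLError

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def fetch_quality(repo: str, timeout: float = 10.0):
    """GET the quality record for a repo like 'aamanlamba/phi3-tune-payments'.

    Returns the decoded JSON payload, or None if the request fails
    (offline, timeout, or the 100 requests/day keyless limit is exhausted).
    """
    url = f"{BASE}/{repo}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp)
    except (URLError, TimeoutError, json.JSONDecodeError):
        return None

print(fetch_quality("aamanlamba/phi3-tune-payments"))
```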
Higher-rated alternatives
gustavecortal/gpt-j-fine-tuning-example
Fine-tuning 6-Billion GPT-J (& other models) with LoRA and 8-bit compression
Ebimsv/LLM-Lab
Pretraining and Finetuning Language Model
msmrexe/pytorch-lora-from-scratch
A from-scratch PyTorch implementation of Low-Rank Adaptation (LoRA) to efficiently fine-tune...
linhaowei1/Fine-tuning-Scaling-Law
🌹[ICML 2024] Selecting Large Language Model to Fine-tune via Rectified Scaling Law
HamzahDrawsheh/fine-tuning-and-instruction-tuning-of-large-language-models
This project demonstrates the use of Large Language Models (LLMs) for Natural Language...