Abhijeet-ist/FineTunning
A short script for fine-tuning an open-source LLM with customized parameters and your own data.
This script helps machine learning practitioners adapt existing large language models (LLMs) to specific tasks using their own custom datasets. You provide an instruction-response JSON dataset and a pre-trained LLM, and it outputs a set of lightweight LoRA adapter weights that specialize the model for your task. It's designed for data scientists and researchers who want to customize LLMs without extensive computational resources.
Use this if you need to quickly and cost-effectively fine-tune an open-source large language model for a specialized domain or task using your own dataset, especially when working within Kaggle's free GPU limits.
Not ideal if you need to train a large language model from scratch, require full model weights after training, or have extensive computational resources that would benefit from larger batch sizes and longer training runs.
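The repo's notebook code isn't reproduced on this page, but the core idea it relies on (LoRA: training a small low-rank update instead of the full weight matrix) can be sketched in a few lines of NumPy. All names and shapes below are illustrative, not taken from the repository:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight (illustrative size; real LLM layers are far larger).
d_out, d_in = 8, 8
W = rng.normal(size=(d_out, d_in))

# LoRA adapter: two small trainable matrices of rank r << min(d_out, d_in).
r, alpha = 2, 4
A = rng.normal(size=(r, d_in)) * 0.01  # small random init
B = np.zeros((d_out, r))               # zero init => adapter starts as a no-op

def adapted_forward(x):
    # Base output plus the scaled low-rank correction: W @ x + (alpha/r) * B @ A @ x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B zeroed, the adapted layer matches the frozen layer exactly.
assert np.allclose(adapted_forward(x), W @ x)

# Only A and B would be trained and saved as the "adapter"; W stays frozen.
print(A.size + B.size, "trainable adapter params vs", W.size, "frozen")
```

This is why the script's output is small: only the adapter matrices (here 32 numbers vs 64 frozen ones; a far more extreme ratio at LLM scale) need to be stored and shared.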
Stars
8
Forks
—
Language
Jupyter Notebook
License
—
Category
—
Last pushed
Feb 24, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Abhijeet-ist/FineTunning"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
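The same endpoint can be called from Python using only the standard library. The URL pattern below mirrors the curl example; the response schema is not documented on this page, so the fetch itself is left commented out:

```python
from urllib.request import urlopen  # stdlib; no extra dependencies

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, repo: str) -> str:
    # Mirrors the curl example: /api/v1/quality/<category>/<owner>/<name>
    return f"{BASE}/{category}/{repo}"

url = quality_url("transformers", "Abhijeet-ist/FineTunning")
print(url)
# Uncomment to fetch (anonymous access: 100 requests/day):
# with urlopen(url, timeout=10) as resp:
#     print(resp.read().decode())
```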
Higher-rated alternatives
unslothai/unsloth
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama,...
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
modelscope/ms-swift
Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5,...
oumi-ai/oumi
Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM!
linkedin/Liger-Kernel
Efficient Triton Kernels for LLM Training