Abhijeet-ist/FineTunning

A short script for fine-tuning an open-source LLM with customized parameters and data.

Score: 21 / 100 (Experimental)

This script helps machine learning practitioners adapt existing large language models (LLMs) to specific tasks using their own datasets. You provide an instruction-response JSON dataset and a pre-trained LLM, and it outputs a set of lightweight LoRA adapters that specialize the model for your task. It's designed for data scientists and researchers who want to customize LLMs without extensive computational resources.
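The input is an instruction-response JSON dataset. The field names below (`instruction`, `response`) and the prompt template are assumptions for illustration, not necessarily the schema the notebook uses; a minimal sketch of loading such a dataset and formatting it into training prompts:

```python
import json

# Hypothetical example records -- the "instruction"/"response" field names
# are an assumption; check the notebook for its actual schema.
records = [
    {"instruction": "Summarize: LoRA trains small adapter matrices.",
     "response": "LoRA fine-tunes a model via low-rank adapters."},
    {"instruction": "Translate 'hello' to French.",
     "response": "bonjour"},
]

def to_prompt(rec: dict) -> str:
    """Format one instruction-response pair into a single training prompt."""
    return (f"### Instruction:\n{rec['instruction']}\n"
            f"### Response:\n{rec['response']}")

# Round-trip through JSON, as the script would consume a .json file on disk.
dataset_json = json.dumps(records, indent=2)
prompts = [to_prompt(r) for r in json.loads(dataset_json)]
print(prompts[0].splitlines()[0])  # -> ### Instruction:
```

Each formatted prompt would then be tokenized and fed to the trainer; only the LoRA adapter weights are updated, which is what keeps this workable within free GPU limits.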

Use this if you need to quickly and cost-effectively fine-tune an open-source large language model for a specialized domain or task using your own dataset, especially when working within Kaggle's free GPU limits.

Not ideal if you need to train a large language model from scratch, require full model weights after training, or have extensive computational resources that would benefit from larger batch sizes and longer training runs.

Tags: natural-language-processing, machine-learning-engineering, custom-model-training, data-science, AI-research
No license, no package, no dependents.
Maintenance: 10 / 25
Adoption: 4 / 25
Maturity: 7 / 25
Community: 0 / 25


Stars: 8
Forks:
Language: Jupyter Notebook
License: none
Last pushed: Feb 24, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Abhijeet-ist/FineTunning"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
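The same endpoint can be called from a script. The sketch below uses only the Python standard library; the `quality_url` helper name is mine, not part of the API, and it assumes the URL shape shown in the curl example above:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository (assumed URL shape)."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

url = quality_url("transformers", "Abhijeet-ist", "FineTunning")
print(url)

# Uncomment to actually fetch; the response is JSON and unauthenticated
# requests are rate-limited to 100/day:
# with urllib.request.urlopen(url) as resp:
#     data = json.loads(resp.read())
```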