shrut2702/upasak
UI-based Fine-Tuning for Large Language Models (LLMs)
This tool lets AI researchers, data scientists, and machine learning engineers adapt large language models (LLMs) to specific tasks or datasets without writing code. You supply your text data in common formats such as CSV or JSON, and it produces a fine-tuned LLM for your application. It is aimed at practitioners who need to customize models while keeping their data private.
Available on PyPI.
Use this if you need to customize an LLM for a particular domain or instruction style and want an intuitive interface with built-in data privacy features.
Not ideal if you need to fine-tune non-text-based LLMs or prefer a purely code-driven, highly customizable deep learning workflow.
Stars: 20
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Dec 04, 2025
Commits (30d): 0
Dependencies: 14
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/shrut2702/upasak"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
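The same endpoint can be queried programmatically. A minimal Python sketch, using only the standard library; the JSON response shape is not documented here, so treat the decoded dict as opaque until you inspect it:

```python
import json
import urllib.request

# Base of the pt-edge quality API (from the curl example above).
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, repo: str) -> str:
    """Build the report URL for a repo, e.g. quality_url("transformers", "shrut2702/upasak")."""
    return f"{BASE}/{ecosystem}/{repo}"

def fetch_quality(ecosystem: str, repo: str, timeout: float = 10.0) -> dict:
    """GET the quality report and decode the JSON body.

    Unauthenticated calls are limited to 100 requests/day; the exact
    response fields are an assumption -- check the raw JSON first.
    """
    with urllib.request.urlopen(quality_url(ecosystem, repo), timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))

print(quality_url("transformers", "shrut2702/upasak"))
```

With an API key (1,000 requests/day), you would presumably pass it as a header or query parameter; the key mechanism is not specified on this page, so consult the API's own documentation.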
Higher-rated alternatives
unslothai/unsloth
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama,...
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
modelscope/ms-swift
Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5,...
oumi-ai/oumi
Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM!
linkedin/Liger-Kernel
Efficient Triton Kernels for LLM Training