Chongjie-Si/Subspace-Tuning
A generalized framework for subspace tuning methods in parameter-efficient fine-tuning.
This framework helps machine learning researchers and practitioners adapt large pre-trained models to specific tasks without retraining the entire model. Starting from an existing language or image-generation model, it trains only a small set of added or selected parameters to produce a specialized model for tasks such as natural language understanding, question answering, or subject-driven image generation. It is aimed at anyone who needs to fine-tune large models for diverse applications while conserving compute.
Use this if you are a machine learning researcher or engineer looking to fine-tune large pre-trained models for tasks like natural language understanding (NLU), natural language generation (NLG), or image generation, and want to do so efficiently without updating all of the model's parameters (see the sketch below).
Not ideal if you are looking for a complete, out-of-the-box solution for end-users, or if you need to train models from scratch rather than adapt existing large ones.
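The common mechanism behind subspace tuning methods such as LoRA, which this framework generalizes, is to freeze the pre-trained weights and learn only a low-rank update that lives in a small subspace. Below is a minimal PyTorch sketch of that idea; the class and parameter names are illustrative assumptions, not the repository's actual API.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank (subspace) update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors A and B span the tuning subspace: delta_W = B @ A.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen forward pass plus the scaled low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(4, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")  # 12,288 vs 590,592 in the base layer

Because lora_B starts at zero, the wrapped layer initially behaves exactly like the frozen base model, and only the small subspace update is learned during fine-tuning.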
Stars: 177
Forks: 10
Language: Python
License: Apache-2.0
Last pushed: Jan 29, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Chongjie-Si/Subspace-Tuning"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
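For scripted access, the same endpoint can be queried from Python using only the standard library. This sketch assumes nothing about the response beyond it being JSON:

import json
import urllib.request

# Endpoint copied from the curl example above; no API key is needed
# for the free tier of 100 requests/day.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/Chongjie-Si/Subspace-Tuning")

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)  # the exact response fields are not documented here

print(json.dumps(data, indent=2))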
Higher-rated alternatives
unslothai/unsloth
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama,...
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
modelscope/ms-swift
Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5,...
oumi-ai/oumi
Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM!
linkedin/Liger-Kernel
Efficient Triton Kernels for LLM Training