MaxwellYaoNi/PACE
[NeurIPS 2024 Spotlight] Official implementation for "PACE: marrying generalization in PArameter-efficient fine-tuning with Consistency rEgularization"
This project gives machine learning researchers and practitioners a way to improve the generalization of large language models (LLMs) and other foundation models. Given a pre-trained model and a task-specific dataset, it fine-tunes the model parameter-efficiently, producing models that perform better on new, unseen data without extensive compute. Anyone adapting large models to specific tasks should find it useful.
Use this if you are a machine learning researcher or practitioner fine-tuning large pre-trained models on new datasets and you want them to generalize well to unseen data, even with limited data or compute.
Not ideal if you want a turnkey solution that requires no understanding of fine-tuning or machine learning concepts; this is a research-oriented implementation.
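The paper's details are not reproduced on this page, but the title points at a general pattern: during fine-tuning, penalize disagreement between stochastic forward passes so the adapted model stays consistent. Below is a minimal PyTorch sketch of that generic idea only; the noise source (dropout), the MSE penalty, the weight `lam`, and the helper name `consistency_loss` are all illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x, y, lam=0.1):
    """Hypothetical sketch: supervised loss plus a consistency penalty
    between two stochastic forward passes (e.g. under dropout).

    `lam` is an assumed regularization weight, not a value from the paper.
    """
    model.train()                                 # keep stochastic layers (dropout) active
    logits_a = model(x)                           # first stochastic pass
    logits_b = model(x)                           # second stochastic pass
    task = F.cross_entropy(logits_a, y)           # ordinary supervised loss
    consistency = F.mse_loss(logits_a, logits_b)  # penalize disagreement between passes
    return task + lam * consistency

# Toy usage: a small classifier with dropout so the two passes differ.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 8), torch.nn.Dropout(0.5), torch.nn.Linear(8, 3)
)
x, y = torch.randn(4, 16), torch.randint(0, 3, (4,))
loss = consistency_loss(model, x, y)
loss.backward()
```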
Stars: 18
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Feb 03, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/MaxwellYaoNi/PACE"
Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
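If you prefer to consume the endpoint from Python rather than curl, a minimal sketch follows. It assumes only that the endpoint returns JSON; the response schema is not documented on this page, so the example simply pretty-prints whatever comes back.

```python
import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/MaxwellYaoNi/PACE"

# Fetch the quality data for this repository; no API key is needed
# for the free tier (100 requests/day).
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# The response schema is undocumented here, so just pretty-print it.
print(json.dumps(data, indent=2))
```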
Higher-rated alternatives
unslothai/unsloth
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama,...
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
modelscope/ms-swift
Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5,...
oumi-ai/oumi
Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM!
linkedin/Liger-Kernel
Efficient Triton Kernels for LLM Training