MaxwellYaoNi/PACE

[NeurIPS 2024 Spotlight] Official implementation for "PACE: marrying generalization in PArameter-efficient fine-tuning with Consistency rEgularization"

Score: 32 / 100 (Emerging)

This project gives machine learning researchers and practitioners a way to improve the generalization of fine-tuned large language models (LLMs) and other foundation models. It adds consistency regularization on top of parameter-efficient fine-tuning (PEFT): given a pre-trained model and a task-specific dataset, it produces a fine-tuned model that performs better on new, unseen data without requiring extensive computational resources. Anyone adapting large models to specific tasks would find this valuable.

Use this if you are a machine learning researcher or practitioner fine-tuning large pre-trained models on new datasets and want them to generalize well to unseen data, even with limited data or compute.

Not ideal if you are looking for a pre-built solution that doesn't require any understanding of model fine-tuning or machine learning concepts, as this is a research-oriented implementation.
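
For intuition, here is a minimal PyTorch sketch of the generic consistency-regularization idea named in the paper title. It is not the official PACE method: the dropout-based perturbation, the MSE consistency term, and the weight lam are all illustrative assumptions; consult the repository for the actual mechanism.

import torch.nn.functional as F

def train_step(model, x, y, optimizer, lam=1.0):
    # Two stochastic forward passes (dropout active in train mode)
    # produce two slightly different views of the same input.
    model.train()
    logits_a = model(x)
    logits_b = model(x)
    # Standard task loss on one pass.
    task_loss = F.cross_entropy(logits_a, y)
    # Consistency term: penalize disagreement between the two passes.
    consistency = F.mse_loss(logits_a, logits_b)
    loss = task_loss + lam * consistency
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

If the model has no stochastic layers, the two passes coincide and the consistency term vanishes; PACE injects its own feature perturbations, so this sketch only conveys the shape of the loss.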

machine-learning-research large-model-fine-tuning model-generalization parameter-efficient-learning artificial-intelligence
No Package · No Dependents
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 0 / 25

How are scores calculated?
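
Judging from the breakdown above, the four category scores sum to the overall score: 10 + 6 + 16 + 0 = 32, matching the 32 / 100 shown at the top. (This additive relationship is inferred from the numbers on this page, not a documented formula.)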

Stars: 18
Forks:
Language: Python
License: MIT
Last pushed: Feb 03, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/MaxwellYaoNi/PACE"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
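
A stdlib Python equivalent of the curl call above; the pretty-printing assumes the endpoint returns JSON, which this page does not document:

import json
import urllib.request

# Same endpoint as the curl example (100 requests/day without a key).
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/MaxwellYaoNi/PACE"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)  # assumes a JSON response body

print(json.dumps(data, indent=2))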