mit-han-lab/offsite-tuning

Offsite-Tuning: Transfer Learning without Full Model

Score: 42 / 100 (Emerging)

Offsite-Tuning lets organizations adapt powerful pre-trained AI models to their own data and tasks without sharing sensitive information or needing massive compute. The model owner sends a small trainable adapter and a compressed, lossy emulator of the full model to the data owner, who fine-tunes the adapter locally against the emulator. The fine-tuned adapter is returned and plugged back into the full model, yielding a model tailored to the data owner's needs while both the private data and the full model weights stay where they are. Intended for AI practitioners, data scientists, and business units who want to customize large foundation models.
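The workflow above can be sketched with a toy model. Everything here is illustrative: the "layers" are scalar multipliers and the every-other-layer compression is a stand-in for the repo's actual layer-drop emulator, which operates on transformer checkpoints.

```python
# Toy illustration of the offsite-tuning workflow: split the model into
# trainable adapters plus a frozen middle, compress the middle into an
# emulator, fine-tune adapters offsite, plug them back into the full model.
# All names and the compression scheme are illustrative, not the repo's API.

def make_layer(scale):
    """A 'layer' here is just multiplication by a scalar."""
    return lambda x: x * scale

# --- Model owner side -------------------------------------------------
full_model = [make_layer(s) for s in (1.1, 0.9, 1.0, 1.2, 0.8, 1.05)]

# Outermost layers become the trainable adapters; the frozen middle is
# compressed into an emulator by dropping every other layer (lossy).
adapters = [full_model[0], full_model[-1]]
middle = full_model[1:-1]
emulator = middle[::2]

# --- Data owner side --------------------------------------------------
def run(adapters, body, x):
    """Forward pass: input adapter, body layers, output adapter."""
    x = adapters[0](x)
    for layer in body:
        x = layer(x)
    return adapters[1](x)

# "Fine-tuning" stands in for gradient updates on private data: only the
# adapter parameters change, the emulator stays frozen.
tuned_adapters = [make_layer(1.0), make_layer(1.0)]

# --- Model owner plugs the returned adapters into the FULL model ------
out = run(tuned_adapters, middle, 10.0)
```

The key property the sketch preserves: the data owner only ever sees the adapters and the emulator, never the full middle of the model, and the model owner only ever receives adapter weights, never the data.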

387 stars. No commits in the last 6 months.

Use this if you need to customize a large, proprietary AI model with your specific data, but you cannot or do not want to share your data with the model owner, and you have limited computational resources for fine-tuning.

Not ideal if you have full access to the AI model's internal workings and weights, or if you prefer to perform traditional fine-tuning directly on your own infrastructure without privacy constraints.

Tags: AI model customization, data privacy, machine learning, foundation model adaptation, secure AI deployment
Flags: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 16 / 25


Stars: 387
Forks: 40
Language: Python
License: MIT
Last pushed: Nov 27, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mit-han-lab/offsite-tuning"

Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.
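The same endpoint can be called from Python instead of curl. Only the URL pattern comes from the example above; the category/owner/repo path structure and the helper name are assumptions, and the live fetch is left commented out because the API is rate-limited.

```python
# Hypothetical helper for building the quality-API URL shown above.
# Only the URL pattern is taken from the page; everything else is a sketch.
from urllib.parse import quote
from urllib.request import urlopen  # noqa: F401  (used only in the commented fetch)
import json  # noqa: F401

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the per-repository quality endpoint URL, escaping each segment."""
    return f"{BASE}/{quote(category, safe='')}/{quote(owner, safe='')}/{quote(repo, safe='')}"

url = quality_url("ml-frameworks", "mit-han-lab", "offsite-tuning")
# data = json.load(urlopen(url))  # uncomment to fetch live (counts against the daily quota)
```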