aimagelab/TransFusion

Official codebase of "Update Your Transformer to the Latest Release: Re-Basin of Task Vectors" - ICML 2025

Quality score: 15 / 100 (Experimental)

TransFusion helps machine learning researchers and practitioners combine or transfer knowledge between different versions of vision models, specifically CLIP and Vision Transformer (ViT) architectures. It takes a pre-trained model and its fine-tuned counterparts as input and outputs a merged model or transferred task vectors, evaluated across a range of image datasets. It is aimed at users working with advanced computer vision models.
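The "task vectors" mentioned above are parameter-space differences between a fine-tuned model and its pre-trained base. A minimal sketch of the idea on plain Python dicts (the function names and dict layout here are illustrative, not this repository's API):

```python
# Hypothetical sketch of task-vector arithmetic: a task vector is the
# element-wise difference between fine-tuned and pre-trained weights,
# and can be re-applied to a base model with a scaling factor.

def task_vector(pretrained: dict, finetuned: dict) -> dict:
    """Compute the per-parameter difference finetuned - pretrained."""
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def apply_task_vector(pretrained: dict, tv: dict, scale: float = 1.0) -> dict:
    """Add a (scaled) task vector back onto a base model's weights."""
    return {k: pretrained[k] + scale * tv[k] for k in pretrained}

# Toy example with two scalar "parameters":
pre = {"w": 1.0, "b": 0.5}
ft = {"w": 1.5, "b": 0.0}
tv = task_vector(pre, ft)                       # {"w": 0.5, "b": -0.5}
merged = apply_task_vector(pre, tv, scale=0.5)  # {"w": 1.25, "b": 0.25}
```

In practice the dicts would be model state dicts of tensors; the paper's contribution is aligning ("re-basing") these vectors when the two base models are not identical, which this toy sketch does not cover.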

No commits in the last 6 months.

Use this if you need to integrate knowledge from different pre-trained vision models or transfer specific learned tasks between them, even if their underlying structures aren't perfectly aligned.

Not ideal if you are looking for a simple, off-the-shelf solution for general image classification without needing to merge or transfer knowledge between complex transformer models.

computer-vision deep-learning-research model-merging transfer-learning vision-transformers
No License · Stale 6m · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 6 / 25
Maturity: 7 / 25
Community: 0 / 25


Stars: 23
Forks:
Language: Python
License: none
Last pushed: Jul 30, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/aimagelab/TransFusion"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
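The same endpoint can be called from Python. A minimal sketch using only the standard library; the URL structure comes from the curl example above, but the fields in the JSON response are not documented here, so they are left unparsed:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "aimagelab", "TransFusion")

# Fetching requires network access; the endpoint returns JSON.
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```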