Mxbonn/ltmp

Code for Learned Thresholds Token Merging and Pruning for Vision Transformers (LTMP), a technique that reduces Vision Transformers to any desired size with minimal loss of accuracy.

Score: 19 / 100 (Experimental)

This project helps machine learning engineers and researchers optimize Vision Transformers for practical deployment. It takes an existing Vision Transformer model and outputs a smaller, more computationally efficient version with minimal impact on accuracy. This is ideal for those developing and deploying computer vision applications where performance and resource usage are critical.
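
For intuition, here is a minimal PyTorch sketch of the learned-threshold pruning idea: a block learns a threshold, tokens whose importance falls below it are masked out, and a straight-through estimator keeps the threshold trainable. This is an illustrative sketch only; the class name, the sigmoid temperature, and the random importance scores are assumptions, and the actual repository code differs in detail.

# Illustrative sketch of learned-threshold token pruning.
# NOT the repository's actual API; names and details are assumptions.
import torch
import torch.nn as nn

class LearnedThresholdPruning(nn.Module):
    """Keeps tokens whose importance score exceeds a learned threshold."""

    def __init__(self, init_threshold: float = 0.0):
        super().__init__()
        self.threshold = nn.Parameter(torch.tensor(init_threshold))

    def forward(self, tokens: torch.Tensor, importance: torch.Tensor):
        # tokens: (B, N, D); importance: (B, N), e.g. the mean attention
        # each token receives. A soft mask keeps the threshold
        # differentiable; the hard mask is used in the forward pass.
        soft = torch.sigmoid((importance - self.threshold) / 0.1)
        hard = (soft > 0.5).float()
        mask = hard + soft - soft.detach()  # straight-through estimator
        return tokens * mask.unsqueeze(-1), mask

# Toy usage: 4 images, 197 tokens (CLS + 196 patches), dim 384.
x = torch.randn(4, 197, 384)
scores = torch.rand(4, 197)  # stand-in for attention-based importance
pruner = LearnedThresholdPruning()
out, kept = pruner(x, scores)
print(out.shape, kept.sum(dim=1))  # tokens kept per image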

No commits in the last 6 months.

Use this if you need to reduce the computational cost and size of your Vision Transformer models for deployment, especially in resource-constrained environments.

Not ideal if your primary concern is developing novel Vision Transformer architectures rather than optimizing existing ones.

computer-vision model-optimization deep-learning-deployment image-classification machine-learning-engineering
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 5 / 25


Stars: 17
Forks: 1
Language: Python
License: None
Last pushed: Nov 24, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Mxbonn/ltmp"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
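
The same data can be fetched from Python; below is a minimal sketch using only the standard library. The response schema is not documented on this page, so the example simply pretty-prints whatever JSON the endpoint returns.

# Sketch: fetch the quality data for Mxbonn/ltmp from the API.
# Assumes an anonymous request (no key); the response format is unverified.
import json
import urllib.request

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/Mxbonn/ltmp"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
print(json.dumps(data, indent=2))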