AIoT-MLSys-Lab/Famba-V

[ECCV 2024 Workshop Best Paper Award] Famba-V: Fast Vision Mamba with Cross-Layer Token Fusion

Score: 18 / 100 (Experimental)

Famba-V helps deep learning researchers and practitioners train Vision Mamba (Vim) models more efficiently. Given an existing Vim model and training data, it produces a Vim model that trains faster and uses less memory while maintaining or improving accuracy. It is aimed at practitioners working with advanced computer vision models.

No commits in the last 6 months.

Use this if you are a deep learning researcher or practitioner experiencing high training times or memory usage with Vision Mamba models for image classification or other computer vision tasks.

Not ideal if you are not working with Vision Mamba or similar Transformer-alternative architectures, or if your primary goal is not training efficiency.

Topics: deep-learning, computer-vision, model-training, neural-networks, machine-learning-research
No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 3 / 25


Stars: 34
Forks: 1
Language: Python
License: None
Last pushed: Sep 30, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/AIoT-MLSys-Lab/Famba-V"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
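For repeated lookups it can be convenient to build the endpoint URL programmatically. A minimal sketch, assuming the path pattern `/api/v1/quality/{category}/{owner}/{repo}` inferred from the example above (the set of valid categories is not documented here):

```python
# Base URL taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report API URL for a repository.

    Assumes the /{category}/{owner}/{repo} path pattern shown in the
    curl example; other categories are a guess, not a documented list.
    """
    return f"{BASE}/{category}/{owner}/{repo}"

print(quality_url("computer-vision", "AIoT-MLSys-Lab", "Famba-V"))
```

The same URL can then be fetched with `curl` or any HTTP client.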