AIoT-MLSys-Lab/Famba-V
[ECCV 2024 Workshop Best Paper Award] Famba-V: Fast Vision Mamba with Cross-Layer Token Fusion
Famba-V helps deep learning researchers and practitioners train Vision Mamba (Vim) models more efficiently. Using cross-layer token fusion, it takes an existing Vim model and training data and produces a Vim model that trains faster and uses less memory while maintaining, or even improving, accuracy.
No commits in the last 6 months.
Use this if you are a deep learning researcher or practitioner experiencing high training times or memory usage with Vision Mamba models for image classification or other computer vision tasks.
Not ideal if you are not working with Vision Mamba or similar Transformer-alternative architectures, or if your primary goal is not training efficiency.
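The core idea named in the title, token fusion, can be illustrated with a minimal sketch: at selected layers, the most similar token pairs are merged by averaging, shrinking the sequence that subsequent layers must process. This is a generic similarity-based merging sketch in NumPy, not Famba-V's actual implementation or fusion strategy; the function name and merging rule are hypothetical.

```python
import numpy as np

def fuse_tokens(tokens: np.ndarray, r: int) -> np.ndarray:
    """Merge the r most-similar token pairs by averaging.

    tokens: (N, D) array of token embeddings.
    Returns an (N - r, D) array. Hypothetical sketch, not the
    actual Famba-V fusion strategy.
    """
    n = tokens.shape[0]
    # cosine similarity between all token pairs
    normed = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)  # ignore self-similarity

    merged = tokens.copy()
    alive = np.ones(n, dtype=bool)
    for _ in range(r):
        # pick the most similar pair among surviving tokens
        masked = np.where(np.outer(alive, alive), sim, -np.inf)
        i, j = np.unravel_index(np.argmax(masked), masked.shape)
        merged[i] = (merged[i] + merged[j]) / 2  # fuse token j into token i
        alive[j] = False
    return merged[alive]
```

Reducing the token count this way cuts both the compute and the activation memory of every layer that runs after the fusion point, which is where the training-efficiency gain comes from.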
Stars: 34
Forks: 1
Language: Python
License: —
Category: computer-vision
Last pushed: Sep 30, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/AIoT-MLSys-Lab/Famba-V"
Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
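The same call can be made from Python with the standard library. This is a minimal sketch assuming the endpoint returns JSON; the response schema is not documented here, and the helper names are hypothetical.

```python
import json
import urllib.request

API_ROOT = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repo's quality data."""
    return f"{API_ROOT}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (assumed to be a JSON object)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Example (requires network access):
# data = fetch_quality("computer-vision", "AIoT-MLSys-Lab", "Famba-V")
```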
Higher-rated alternatives
col14m/cadrille
[ICLR2026] cadrille: Multi-modal CAD Reconstruction with Online Reinforcement Learning
filaPro/cad-recode
[ICCV2025] CAD-Recode: Reverse Engineering CAD Code from Point Clouds
pengsongyou/openscene
[CVPR'23] OpenScene: 3D Scene Understanding with Open Vocabularies
worldbench/3EED
[NeurIPS 2025 DB Track] 3EED: Ground Everything Everywhere in 3D
cambrian-mllm/cambrian-s
Cambrian-S: Towards Spatial Supersensing in Video