kyegomez/VLM-Mamba

We introduce VLM-Mamba, the first Vision-Language Model built entirely on State Space Models (SSMs), specifically leveraging the Mamba architecture.

Score: 32 / 100 (Emerging)

This project offers AI developers a new way to build Vision-Language Models (VLMs) that understand both images and text. It takes images and text tokens as input and produces integrated vision-language outputs, enabling tasks such as image captioning and visual question answering. It is aimed at AI researchers and machine learning engineers developing more efficient multi-modal AI systems.
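
As a rough illustration of that data flow, here is a minimal, self-contained PyTorch sketch: image patches and text tokens are embedded into a single sequence and run through simplified state-space blocks. Every name here (ToyVLMMamba, SimpleSSMBlock), the layer sizes, and the toy diagonal recurrence are assumptions for illustration only, not the repo's actual implementation, which uses the full Mamba selective-scan architecture.

# Illustrative sketch only; shapes, names, and the diagonal-SSM block
# are assumptions, not code from kyegomez/VLM-Mamba.
import torch
import torch.nn as nn

class SimpleSSMBlock(nn.Module):
    """Toy diagonal state-space layer: h_t = a * h_{t-1} + B x_t, y_t = C h_t."""
    def __init__(self, d_model, d_state=16):
        super().__init__()
        self.a = nn.Parameter(torch.rand(d_state) * 0.5 + 0.4)  # decay in (0.4, 0.9)
        self.B = nn.Linear(d_model, d_state, bias=False)
        self.C = nn.Linear(d_state, d_model, bias=False)

    def forward(self, x):                       # x: (batch, seq, d_model)
        h = torch.zeros(x.size(0), self.a.numel(), device=x.device)
        ys = []
        for t in range(x.size(1)):              # sequential scan; state is O(1) in seq length
            h = self.a * h + self.B(x[:, t])
            ys.append(self.C(h))
        return x + torch.stack(ys, dim=1)       # residual connection

class ToyVLMMamba(nn.Module):
    """Images become patch tokens, text becomes embeddings; both share one SSM stack."""
    def __init__(self, vocab=32000, d_model=256, patch=16, layers=4):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, d_model, kernel_size=patch, stride=patch)
        self.tok_embed = nn.Embedding(vocab, d_model)
        self.blocks = nn.Sequential(*[SimpleSSMBlock(d_model) for _ in range(layers)])
        self.head = nn.Linear(d_model, vocab)

    def forward(self, images, text_ids):
        img_tok = self.patch_embed(images).flatten(2).transpose(1, 2)  # (B, patches, d)
        txt_tok = self.tok_embed(text_ids)                             # (B, text_len, d)
        seq = torch.cat([img_tok, txt_tok], dim=1)  # image tokens prefix the text tokens
        return self.head(self.blocks(seq))          # logits over the vocabulary

model = ToyVLMMamba()
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 32000, (2, 8)))
print(logits.shape)  # torch.Size([2, 204, 32000]): 196 image patches + 8 text tokens

The sequential scan above is written as a Python loop for clarity; real Mamba implementations use a hardware-aware parallel scan, which is the source of the claimed efficiency over attention.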

Use this if you are an AI developer looking to build vision-language models with a reduced memory footprint and faster inference than comparable Transformer-based models.

Not ideal if you are a practitioner who wants a ready-to-use, pre-trained VLM for end-user applications without engaging in model development.

Tags: AI-development, multi-modal-AI, machine-learning-engineering, vision-language-modeling
No package · No dependents
Maintenance: 6 / 25
Adoption: 5 / 25
Maturity: 15 / 25
Community: 6 / 25

Stars: 14
Forks: 1
Language: Python
License: MIT
Last pushed: Jan 05, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/VLM-Mamba"
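
A sketch of the same request in Python, assuming only that the endpoint returns JSON; the exact response schema is not documented here:

import requests

# Same endpoint as the curl command above.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/VLM-Mamba"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on rate limiting or server errors
print(resp.json())       # inspect the returned fields; names are not documented here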

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.