kyegomez/MultiModalMamba

A novel implementation that fuses ViT with Mamba into a fast, agile, and high-performance multi-modal model. Powered by Zeta, the simplest AI framework ever.

Score: 49 / 100 (Emerging)

This project offers an advanced AI model that can understand and process both text and images simultaneously, like a person who can read and see at the same time. You can feed it a combination of written information and visual data, and it will generate an integrated interpretation. It's designed for data scientists and AI researchers who need to build systems that make sense of different types of information together.
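As a rough illustration of feeding it combined inputs, the sketch below builds the model and passes a batch of token IDs alongside a batch of images. The `mm_mamba` module path, the `MultiModalMamba` class, and its constructor arguments are assumptions drawn from the project's README-style usage and may differ from the current API; check the repository before relying on them.

import torch
from mm_mamba import MultiModalMamba  # module/class names assumed; verify against the repo

# Dummy inputs: one sequence of 196 token IDs and one 224x224 RGB image.
tokens = torch.randint(0, 10000, (1, 196))
images = torch.randn(1, 3, 224, 224)

# Constructor arguments are illustrative assumptions, not a verified signature.
model = MultiModalMamba(
    vocab_size=10000,
    dim=512,
    depth=6,
    heads=8,
    image_size=224,
    patch_size=16,
    fusion_method="mlp",
)

# The forward pass fuses the text and image streams into a single output tensor.
out = model(tokens, images)
print(out.shape)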


Use this if you are building an AI application that needs to understand context from both text and images, such as content analysis or an advanced search engine.

Not ideal if your task only involves a single data type (like just text or just images) or if you need a simpler, less customizable model.

Tags: AI-development, multi-modal-learning, image-text-analysis, machine-learning-engineering
Package: none
Dependents: none
Maintenance 10 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 13 / 25

How are scores calculated?
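The four sub-scores above each run out of 25 and sum to the overall score shown at the top (10 + 10 + 16 + 13 = 49). A minimal sketch of that arithmetic, assuming the composite is a simple additive total:

# Sub-scores as listed above; assumes the overall score is their plain sum.
subscores = {"maintenance": 10, "adoption": 10, "maturity": 16, "community": 13}
overall = sum(subscores.values())
print(f"{overall} / 100")  # 49 / 100, matching the score shown above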

Stars: 465
Forks: 25
Language: Python
License: MIT
Last pushed: Feb 13, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/MultiModalMamba"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
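As a quick usage example, the same endpoint can be queried from Python. The response field names below ("score", "stars") are assumptions about the JSON schema, which this listing does not document; inspect the raw response to confirm.

import requests  # third-party HTTP client: pip install requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/MultiModalMamba"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()

# Field names are guesses at the schema, not documented in this listing.
print(data.get("score"), data.get("stars"))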