Gen-Verse/MMaDA

MMaDA - Open-Sourced Multimodal Large Diffusion Language Models (dLLMs with block diffusion, mixed-CoT, unified RL)

Quality score: 51 / 100 (Established)

MMaDA is a unified multimodal diffusion model for professionals who need to generate both text and images, or to reason about how the two modalities relate. It accepts text, images, or a combination of both, and produces generated text, new images, or analyses of multimodal content. It is aimed at researchers, content creators, and AI practitioners building advanced generative AI applications.

Use this if you need a single AI model to handle complex reasoning across text and images, and generate high-quality multimodal content.

Not ideal if you only need a simple text-to-image generator or a basic language model, as its advanced features might be overkill.

multimodal-AI generative-AI text-to-image AI-research content-creation
Package: none
Dependents: none
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 15 / 25
Community: 16 / 25

Stars: 1,611
Forks: 87
Language: Python
License: MIT
Last pushed: Feb 14, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Gen-Verse/MMaDA"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
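
The same request from Python, as a minimal sketch: the endpoint and rate limits come from the listing above, but the response schema and the header for passing an API key are not documented on this page, so the snippet makes an anonymous request and prints whatever JSON comes back.

import json
import requests

# Endpoint from the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/Gen-Verse/MMaDA"

# Anonymous request (100 requests/day tier); the header for an API key
# is not documented here, so none is sent.
resp = requests.get(URL, timeout=10)
resp.raise_for_status()

# Assumption: the endpoint returns JSON. Pretty-print it without relying
# on any particular field names.
print(json.dumps(resp.json(), indent=2))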