Gen-Verse/MMaDA
MMaDA - Open-Sourced Multimodal Large Diffusion Language Models (dLLMs with block diffusion, mixed-CoT, unified RL)
MMaDA is a unified multimodal diffusion language model that accepts text, images, or a combination of both, and can generate text, produce new images, or reason jointly over multimodal content. It is aimed at researchers, content creators, and AI practitioners building advanced generative applications.
Use this if you need a single AI model to handle complex reasoning across text and images, and generate high-quality multimodal content.
Not ideal if you only need a simple text-to-image generator or a basic language model, as its advanced features might be overkill.
Stars
1,611
Forks
87
Language
Python
License
MIT
Category
Diffusion
Last pushed
Feb 14, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Gen-Verse/MMaDA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
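If you prefer to script the lookup instead of calling curl by hand, here is a minimal Python sketch against the same endpoint. It assumes the endpoint returns JSON; the Authorization header scheme for keyed access is a guess, so check the API docs before relying on it.

import requests

API_URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/Gen-Verse/MMaDA"

def fetch_repo_quality(api_key=None):
    """Fetch quality data for Gen-Verse/MMaDA.

    Without a key the API allows 100 requests/day; a free key raises
    that to 1,000/day.
    """
    headers = {}
    if api_key:
        # Header name is an assumption; consult the API docs for the
        # actual authentication scheme.
        headers["Authorization"] = f"Bearer {api_key}"
    resp = requests.get(API_URL, headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = fetch_repo_quality()
    # Field names in the response are not documented here; print the
    # raw payload and inspect it to see what is available.
    print(data)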
Related models
FlorianFuerrutter/genQC
Generative Quantum Circuits
horseee/DeepCache
[CVPR 2024] DeepCache: Accelerating Diffusion Models for Free
kuleshov-group/mdlm
[NeurIPS 2024] Simple and Effective Masked Diffusion Language Model
Shark-NLP/DiffuSeq
[ICLR'23] DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models
jeongwhanchoi/SCONE
"SCONE: A Novel Stochastic Sampling to Generate Contrastive Views and Hard Negative Samples for...