muzishen/RCDMs
[AAAI 2025] 🎬RCDMs🎬: Boosting Consistency in Story Visualization with Rich-Contextual Conditional Diffusion Models. RCDMs improve story generation with strong semantic and temporal consistency, integrating rich contextual conditions and enabling one-pass inference for enhanced coherence.
This project helps creators such as game developers and comic artists generate consistent visual stories. You provide a series of text descriptions (captions) and, optionally, reference images for an initial scene. It outputs a sequence of images that depict the narrative, maintaining both semantic meaning and visual style across all frames. This tool is for anyone who needs to quickly visualize a multi-scene narrative while ensuring visual coherence.
120 stars. No commits in the last 6 months.
Use this if you need to create a visual sequence from text descriptions where maintaining consistent characters, styles, and environments across multiple frames is crucial.
Not ideal if you only need to generate single, standalone images from text, or if you require extremely precise control over every minute detail of each frame.
Stars: 120
Forks: 3
Language: Python
License: —
Category:
Last pushed: Sep 30, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/muzishen/RCDMs"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
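For scripted access, the request above can be wrapped in a small helper. This is a minimal sketch, assuming only the `category/owner/repo` path layout visible in the curl example; the function name `quality_url` is hypothetical, not part of the service's documentation:

```python
import urllib.parse

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository.

    The path layout (category/owner/repo) is inferred from the
    single documented example request; other endpoints may differ.
    """
    path = "/".join(urllib.parse.quote(p, safe="") for p in (category, owner, repo))
    return f"{BASE_URL}/{path}"

# Reproduces the curl example above:
print(quality_url("diffusion", "muzishen", "RCDMs"))
# → https://pt-edge.onrender.com/api/v1/quality/diffusion/muzishen/RCDMs
```

The URL can then be fetched with any HTTP client; how an API key is attached for the higher rate limit is not specified here, so that part is left out.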
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...