haoningwu3639/StoryGen
[CVPR 2024] Intelligent Grimm - Open-ended Visual Storytelling via Latent Diffusion Models
This project helps create unique, sequential visual stories from a text description. You input a narrative or story idea, and it generates a series of images that visually tell that story, ensuring consistency across scenes. This tool is ideal for creatives, educators, or content creators who need to illustrate concepts or tales without drawing or taking photos.
264 stars. No commits in the last 6 months.
Use this if you need to transform a written story or concept into a sequence of cohesive and visually engaging images for presentations, digital books, or marketing materials.
Not ideal if you require precise control over every detail of the generated images or need to create a story based on existing video footage.
Stars
264
Forks
18
Language
Python
License
MIT
Category
Diffusion
Last pushed
Dec 02, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/haoningwu3639/StoryGen"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
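The curl command above can also be wrapped in a short Python helper. This is a minimal sketch: only the endpoint URL comes from the listing; the `Authorization: Bearer` header for keyed access and the JSON response shape are assumptions.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the API URL for a repo in a given category."""
    return f"{BASE}/{category}/{repo}"


def fetch_quality(category: str, repo: str, api_key=None) -> dict:
    """GET the quality record for a repo.

    Passing an API key raises the rate limit from 100 to 1,000
    requests/day; the header name used here is an assumption.
    """
    req = urllib.request.Request(quality_url(category, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")  # assumed header
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl example above.
    print(quality_url("diffusion", "haoningwu3639/StoryGen"))
```

Calling `fetch_quality("diffusion", "haoningwu3639/StoryGen")` performs the same request as the curl example.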
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
ModelTC/LightX2V
Light Image Video Generation Inference Framework
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators