UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
This tool helps creative professionals like animators, content creators, or marketers transform long textual narratives into sequences of images. You provide a story broken down into individual text prompts (like scene descriptions), and it generates a series of coherent images or a GIF. This is ideal for anyone needing to visualize stories of up to 100 frames with consistent characters and fine-grained interactions.
949 stars. Actively maintained with 6 commits in the last 30 days.
Use this if you need to create a visual representation of a long story from text, ensuring characters and scenes remain consistent across many frames.
Not ideal if you're looking for a simple, single-image generation from a short prompt or if you're not comfortable with command-line interfaces.
Stars: 949
Forks: 129
Language: Python
License: MIT
Category: Diffusion
Last pushed: Feb 18, 2026
Commits (30d): 6
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/UCSC-VLAA/story-iter"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
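The same endpoint can also be called from code. Below is a minimal Python sketch for building the request URL and fetching the JSON stats; the response field names in the final comment are assumptions, not a documented schema — inspect the actual response to see what the API returns.

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality endpoint URL for a given repo."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON response.

    No API key is needed for up to 100 requests/day.
    """
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Example (performs a network request):
#   data = fetch_quality("diffusion", "UCSC-VLAA", "story-iter")
#   print(data)  # field names are not documented here; inspect the output
```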
Related models
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...
zai-org/ImageReward
[NeurIPS 2023] ImageReward: Learning and Evaluating Human Preferences for Text-to-image Generation