ZiyiZhang27/MVC-ZigAL
[CVPR 2026] Code for the paper "Refining Few-Step Text-to-Multiview Diffusion via Reinforcement Learning"
This project helps 3D artists, game developers, and designers generate multiple consistent views of an object or scene from a text description. You provide a prompt (e.g., "a frog wearing a sweater") and it produces several images of that object from different angles that agree with one another. It targets professionals who need high-quality, consistent multi-view images quickly.
Use this if you need to rapidly generate realistic, mutually consistent sets of images of an object or scene from various viewpoints, based solely on a text description.
Not ideal if you need single standalone images, or if you require precise control over individual camera angles and object poses rather than a general multi-view output.
Stars: 9
Forks: —
Language: Python
License: —
Category: —
Last pushed: Feb 25, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/ZiyiZhang27/MVC-ZigAL"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
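For programmatic use, the curl command above can be reproduced from Python's standard library. A minimal sketch, assuming only the endpoint URL shown above; the JSON response schema (field names and types) is not documented here, so the fetch helper makes no assumptions about it:

```python
import json
import urllib.request

# Endpoint base taken from the curl example in the listing above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"


def repo_quality_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for a given GitHub repository."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_repo_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload for a repository.

    Assumes the endpoint returns JSON; the payload's field names
    are not specified in this listing.
    """
    with urllib.request.urlopen(repo_quality_url(owner, repo)) as resp:
        return json.load(resp)


print(repo_quality_url("ZiyiZhang27", "MVC-ZigAL"))
```

No API key is required at the 100 requests/day tier; how a key would be passed (header vs. query parameter) is not documented in this listing.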
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...