CVL-UESTC/PerLDiff
[ICCV 2025] PerLDiff: Controllable Street View Synthesis Using Perspective-Layout Diffusion Models
This project helps automotive designers, urban planners, and virtual environment creators generate realistic street view images. By providing text descriptions and layout specifications (like road maps), you can create customized synthetic street scenes. The outputs are highly detailed, perspective-accurate images of urban environments, useful for simulations or visual presentations.
Use this if you need to generate high-quality, controllable synthetic street view images for research, design, or simulation purposes, especially when precise control over scene layout and perspective is critical.
Not ideal if you're looking to generate abstract images or scenes outside of urban street environments, or if you don't require fine-grained control over layout and perspective.
Stars
53
Forks
3
Language
Jupyter Notebook
License
MIT
Category
Diffusion
Last pushed
Jan 05, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/CVL-UESTC/PerLDiff"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
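For scripted access, here is a minimal Python sketch built on the curl example above. It assumes the endpoint returns JSON; the "key" query parameter and the fetch_quality helper are illustrative guesses, since how an API key is actually passed is not documented on this page.

import requests

# Endpoint taken from the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/CVL-UESTC/PerLDiff"

def fetch_quality(api_key=None):
    # "key" is a hypothetical query parameter for the optional API key.
    params = {"key": api_key} if api_key else {}
    resp = requests.get(URL, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()  # assumes a JSON response body

if __name__ == "__main__":
    print(fetch_quality())  # anonymous access: 100 requests/day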
Higher-rated alternatives
jayin92/Skyfall-GS
Skyfall-GS: Synthesizing Immersive 3D Urban Scenes from Satellite Imagery
Tencent-Hunyuan/Hunyuan3D-2
High-Resolution 3D Assets Generation with Large Scale Hunyuan3D Diffusion Models.
ActiveVisionLab/gaussctrl
[ECCV 2024] GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing
caiyuanhao1998/Open-DiffusionGS
Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D...
deepseek-ai/DreamCraft3D
[ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with...