CVL-UESTC/PerLDiff

PerLDiff (ICCV 2025): Controllable Street View Synthesis Using Perspective-Layout Diffusion Models

Quality score: 37 / 100 (Emerging)

This project helps automotive designers, urban planners, and virtual environment creators generate realistic street view images. By providing text descriptions and layout specifications (such as road maps), you can create customized synthetic street scenes. The outputs are highly detailed, perspective-accurate images of urban environments, useful for simulation or visual presentation.

Use this if you need to generate high-quality, controllable synthetic street view images for research, design, or simulation purposes, especially when precise control over scene layout and perspective is critical.

Not ideal if you're looking to generate abstract images or scenes outside of urban street environments, or if you don't require fine-grained control over layout and perspective.

autonomous-driving urban-planning virtual-reality-environments automotive-design simulation-data-generation
No package. No dependents.
Maintenance: 6 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 7 / 25


Stars: 53
Forks: 3
Language: Jupyter Notebook
License: MIT
Last pushed: Jan 05, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/CVL-UESTC/PerLDiff"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
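A minimal sketch of building that endpoint URL programmatically, e.g. to query scores for several repositories. The path layout (`/api/v1/quality/<category>/<owner>/<repo>`) is inferred from the single curl example above, so treat it as an assumption rather than documented API structure.

```python
# Build the quality-endpoint URL for a repository.
# NOTE: the path segments (category/owner/repo) are an assumption inferred
# from the one example URL on this page, not from official API docs.

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Return the quality-score endpoint for the given repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

if __name__ == "__main__":
    url = quality_url("diffusion", "CVL-UESTC", "PerLDiff")
    print(url)
    # To actually fetch the JSON (network access required), something like:
    #   import json, urllib.request
    #   data = json.load(urllib.request.urlopen(url))
```

Within the free tier (100 requests/day without a key), a small loop over repositories of interest would stay well inside the limit.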