Ha0Tang/SelectionGAN

[CVPR 2019 Oral] Multi-Channel Attention Selection GAN with Cascaded Semantic Guidance for Cross-View Image Translation

Score: 36 / 100 (Emerging)

This project helps researchers and computer vision scientists generate new images from existing ones by guiding the translation with specific semantic information. You input a source image and a semantic map indicating desired features, and it outputs a new image that incorporates those guided changes. This is useful for individuals working on image synthesis, scene generation, or creating varied visual data for research.

468 stars. No commits in the last 6 months.

Use this if you need to precisely control the features and composition of a generated image based on a reference image and a semantic guide.

Not ideal if you're looking for an unguided image-to-image translation, or if you don't have semantic maps to direct the generation process.

Tags: image synthesis, computer vision research, scene generation, image manipulation, visual data creation
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 18 / 25


Stars: 468
Forks: 59
Language: Python
License: None
Last pushed: Feb 18, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Ha0Tang/SelectionGAN"

Open to everyone: 100 requests/day with no API key. Get a free key for 1,000 requests/day.