kvmanohar22/img2imgGAN
Implementation of the paper "Toward Multimodal Image-to-Image Translation" (NIPS 2017).
This project helps graphic designers, digital artists, and anyone working with visual media transform images from one style to another. Given a single input image, such as a simple line drawing, it generates multiple realistic interpretations, for example a handbag, a shoe, or a building facade. This is useful for rapid prototyping, generating diverse design options, and exploring creative variations without manual effort.
No commits in the last 6 months.
Use this if you need to automatically generate multiple, diverse visual outputs from a single input image, for tasks like concept art or digital asset creation.
Not ideal if you require precise control over every detail of the output image, as the generated images are creative interpretations rather than exact replicas.
Stars: 57
Forks: 17
Language: Python
License: MIT
Last pushed: Feb 12, 2018
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/kvmanohar22/img2imgGAN"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
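If you are querying this endpoint for several repositories, a small helper that builds the URL from an owner/repo pair keeps the calls consistent. This is a minimal sketch; the `quality_api_url` helper name is hypothetical, and only the endpoint pattern shown in the curl command above is assumed.

```python
def quality_api_url(owner: str, repo: str) -> str:
    # Hypothetical helper: builds the public API endpoint
    # following the pattern shown in the curl example above.
    return f"https://pt-edge.onrender.com/api/v1/quality/diffusion/{owner}/{repo}"

print(quality_api_url("kvmanohar22", "img2imgGAN"))
# → https://pt-edge.onrender.com/api/v1/quality/diffusion/kvmanohar22/img2imgGAN
```

The same URL can then be fetched with curl or any HTTP client; without an API key you are limited to the 100 requests/day quota noted above.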
Higher-rated alternatives
yunjey/domain-transfer-network
TensorFlow Implementation of Unsupervised Cross-Domain Image Generation
taesungp/contrastive-unpaired-translation
Contrastive unpaired image-to-image translation, faster and lighter training than cyclegan (ECCV...
PaddlePaddle/PaddleGAN
PaddlePaddle GAN library, including lots of interesting applications like First-Order motion...
tohinz/ConSinGAN
PyTorch implementation of "Improved Techniques for Training Single-Image GANs" (WACV-21)
sagiebenaim/DistanceGAN
Pytorch implementation of "One-Sided Unsupervised Domain Mapping" NIPS 2017