BenjaminJonghyun/SuperStyleNet

SuperStyleNet: Deep Image Synthesis with Superpixel Based Style Encoder (BMVC 2021)

Score: 33 / 100 (Emerging)

This project helps create highly detailed, realistic images by transferring specific visual styles from one image to another, focusing on preserving small-scale object details. You provide a content image and one or more style images, along with their semantic masks (e.g., outlining hair, skin, or buildings). The output is a new image where the content's structure is retained, but its appearance adopts the nuanced styles from the sources. This tool is for researchers and practitioners in computer vision and digital art who need precise style transfer.
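A key part of the workflow described above is supplying semantic masks alongside each image. A minimal sketch of that preprocessing, assuming the style encoder consumes one binary mask per semantic class (the class IDs and sizes below are illustrative, not taken from the repository):

```python
import numpy as np

def to_binary_masks(label_map: np.ndarray, num_classes: int) -> np.ndarray:
    """Split an integer label map (H, W) into per-class binary masks (C, H, W).

    Each output channel marks the pixels of one semantic region
    (e.g. background, hair, skin) with 1.0 and everything else with 0.0.
    """
    return np.stack(
        [(label_map == c).astype(np.float32) for c in range(num_classes)]
    )

# Toy 4x4 label map with three hypothetical classes: 0=background, 1=hair, 2=skin.
label_map = np.array([
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [0, 2, 2, 1],
    [0, 0, 1, 1],
])
masks = to_binary_masks(label_map, num_classes=3)
print(masks.shape)  # (3, 4, 4): one binary mask per class
```

Because the masks partition the image, every pixel belongs to exactly one channel; per-region style codes can then be pooled from the style image under each mask.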

No commits in the last 6 months.

Use this if you need to generate high-quality images with precise control over style transfer, especially when preserving fine details and local stylistic elements across different parts of an image is crucial.

Not ideal if you are looking for a simple, out-of-the-box style transfer application without needing to prepare semantic masks or manage deep learning model training.

Tags: image synthesis, style transfer, computer vision, research, generative art, digital content creation

Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 10 / 25

How are scores calculated?

Stars: 27
Forks: 3
Language: Python
License: MIT
Category: gan-based-t2i
Last pushed: Dec 28, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/BenjaminJonghyun/SuperStyleNet"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
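The same endpoint can be queried from Python instead of curl. The path below is copied from the curl example; the response's JSON schema is not documented here, so the fetch is left commented as an assumption:

```python
import urllib.request
import json

BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("BenjaminJonghyun", "SuperStyleNet")
print(url)

# Uncomment to perform the actual request (no key needed, 100 requests/day):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```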