adobe-research/sam_inversion

[CVPR 2022] GAN inversion and editing with spatially-adaptive multiple latent layers

Quality score: 36 / 100 (Emerging)

This project helps graphic designers, digital artists, and researchers manipulate images generated by StyleGAN2 models. You input an existing image (like a car, face, or cat) and it outputs an 'inverted' version that closely matches your original image while also being highly editable. This allows for precise control over specific regions of the image, making it easier to modify details like a car's wheels or a person's hairstyle.

174 stars. No commits in the last 6 months.

Use this if you need to take a real-world image and transform it into a format that can be easily edited using the powerful generative capabilities of StyleGAN2, allowing for detailed modifications to specific parts of the image.

Not ideal if you primarily work with image generation from scratch or only need broad, high-level edits to an image without region-specific control.

generative-art image-editing digital-content-creation creative-imaging visual-effects
Flags: Stale (6 months) · No package · No dependents
Score breakdown:
- Maintenance: 0 / 25
- Adoption: 10 / 25
- Maturity: 16 / 25
- Community: 10 / 25


Repository stats:
- Stars: 174
- Forks: 10
- Language: Python
- License: —
- Last pushed: Jan 21, 2023
- Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/adobe-research/sam_inversion"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
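The same endpoint can be queried from Python with the standard library. The URL layout below is taken directly from the curl example above; the shape of the JSON response is not documented here, so treat any field access as an assumption to verify against a live response:

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository.

    Path layout follows the curl example:
    {API_BASE}/{category}/{owner}/{repo}
    """
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality record as parsed JSON.

    The response schema is an assumption; inspect the returned
    dict before relying on specific keys.
    """
    with urlopen(build_url(category, owner, repo)) as resp:
        return json.load(resp)

# Example: the URL for this repository's record.
url = build_url("diffusion", "adobe-research", "sam_inversion")
```

Remember the unauthenticated rate limit (100 requests/day) when polling this endpoint in a loop.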