L-YeZhu/BoundaryDiffusion

[NeurIPS2023] BoundaryDiffusion: A learning-free method for semantic control with Diffusion Models

Quality score: 29 / 100 (Experimental)

This project offers a resource-friendly way to modify specific attributes in existing images, such as changing a frown to a smile on a face or altering architectural features of a building. Given an input image, it produces an edited version with the desired semantic change, and because the method is learning-free, no model training or fine-tuning is required. This is useful for graphic designers, content creators, and researchers who need to generate variations of images quickly based on conceptual changes.

No commits in the last 6 months.

Use this if you need to semantically edit images (e.g., adjust facial expressions, alter building styles) using pre-trained diffusion models without complex fine-tuning or re-training.

Not ideal if you need to generate entirely new images from scratch or if your image editing task requires pixel-level precision rather than broad semantic changes.

Tags: image-editing, graphic-design, content-creation, visual-asset-modification, synthetic-media
Badges: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 6 / 25


Stars: 40
Forks: 2
Language: Python
License: MIT
Last pushed: Nov 01, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/L-YeZhu/BoundaryDiffusion"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
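The same data can be fetched programmatically. A minimal Python sketch follows, using only the standard library; the URL pattern matches the curl command above, but the helper names (`quality_url`, `fetch_quality`) and the assumption that the endpoint returns JSON are illustrative, not part of a documented client.

```python
import json
import urllib.request

# Base of the quality API, taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Construct the API URL for a repository's quality data.

    The path segments mirror the curl example; "diffusion" appears to be
    a category slug, but that interpretation is an assumption.
    """
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the quality data and decode it as JSON (schema not documented;
    returning a dict is an assumption). Raises urllib.error.HTTPError on
    failure, e.g. when the daily request quota is exhausted."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Build (but do not fetch) the URL for the repository on this page.
    print(quality_url("diffusion", "L-YeZhu", "BoundaryDiffusion"))
```

For authenticated use at the higher rate limit, the key would presumably be passed as a header or query parameter; the site's API documentation should be consulted for the exact mechanism.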