yc015/scene-representation-diffusion-model

Linear probe found representations of scene attributes in a text-to-image diffusion model

Score: 38 / 100 (Emerging)

This project helps researchers and artists understand and manipulate how text-to-image models create scenes. By adjusting the model's internal representation of elements like foreground objects, you can guide it to generate a series of images that show an object moving, without needing to retrain the model. This is useful for anyone exploring the capabilities of generative AI for creative content or studying model behavior.
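To make the probing idea concrete, here is a minimal, hypothetical sketch of training a linear probe on a diffusion model's internal activations to read out a scene attribute such as a foreground mask. This is not the repository's actual code: the choice of model, U-Net layer, prompt, and the placeholder target masks are all assumptions, and it assumes a CUDA GPU and the diffusers library.

# Hypothetical sketch, not the repository's implementation.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

activations = {}

def save_activation(module, inputs, output):
    # Keep the spatial feature map from one U-Net block (last step wins).
    activations["feat"] = output.detach()

# Assumed layer choice: the final up-block of the U-Net.
hook = pipe.unet.up_blocks[-1].register_forward_hook(save_activation)
_ = pipe("a red ball on a wooden table", num_inference_steps=20)
hook.remove()

feat = activations["feat"].float()  # (B, C, H, W)
c = feat.shape[1]

# Linear probe: per-pixel logistic regression from channels to a binary
# foreground label. A real probe would use masks from a labeled dataset;
# this zero tensor is only a placeholder.
probe = torch.nn.Linear(c, 1).cuda()
target_mask = torch.zeros(feat.shape[0], 1, *feat.shape[2:], device="cuda")

x = feat.permute(0, 2, 3, 1).reshape(-1, c)        # (B*H*W, C)
y = target_mask.permute(0, 2, 3, 1).reshape(-1, 1)  # (B*H*W, 1)

opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.binary_cross_entropy_with_logits(probe(x), y)
    loss.backward()
    opt.step()

Once such a probe is trained, its weights define a direction in activation space associated with the attribute, which is what makes intervention-style edits (like shifting a foreground object) possible without retraining the model.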

No commits in the last 6 months.

Use this if you want to create short video clips with moving foreground objects from a single text prompt using an existing text-to-image model, or if you're a researcher exploring how these models represent and control scene attributes.

Not ideal if you're looking for a simple, out-of-the-box tool for general video generation or if you want to fine-tune a model for specific styles.

Tags: generative-art, AI-research, creative-content-creation, image-manipulation
Flags: Stale (6 months), No Package, No Dependents

Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 15 / 25


Stars: 35
Forks: 6
Language: Jupyter Notebook
License: MIT
Last pushed: Jul 11, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/yc015/scene-representation-diffusion-model"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
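If you prefer Python to curl, a minimal sketch using the requests library is below. The response schema is not documented here, so this simply prints whatever JSON the endpoint returns.

# Minimal sketch: fetch the same quality data in Python.
import requests

url = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "diffusion/yc015/scene-representation-diffusion-model"
)
resp = requests.get(url, timeout=30)
resp.raise_for_status()
print(resp.json())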