energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch

[ECCV 2022] Compositional Generation using Diffusion Models

Quality score: 43 / 100 (Emerging)

This project helps graphic designers, artists, and 3D modelers create more precise and complex images and 3D objects using AI text-to-image models like Stable Diffusion or Point-E. You input multiple descriptive text prompts, and it generates an image or 3D mesh that combines or excludes elements based on your instructions. The result is a highly customized visual output that closely matches your creative vision.
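The combine/exclude behavior described above corresponds to the paper's conjunction (AND) and negation (NOT) operators, which mix per-prompt diffusion noise predictions with signed guidance weights. A minimal NumPy sketch of that combination step, assuming the per-prompt predictions already exist (all array and function names here are illustrative, not the repository's API):

```python
import numpy as np

def compose_noise(eps_uncond, eps_conds, weights):
    """Combine per-prompt noise predictions into one guidance signal.

    eps_uncond : unconditional prediction, shape (H, W, C)
    eps_conds  : list of conditional predictions, one per text prompt
    weights    : guidance weight per prompt; a negative weight steers
                 sampling *away* from that concept (the NOT operator)
    """
    eps = eps_uncond.copy()
    for eps_c, w in zip(eps_conds, weights):
        # classifier-free-guidance-style offset toward/away from each concept
        eps += w * (eps_c - eps_uncond)
    return eps

# Toy example: two "prompts" over a 4x4 single-channel latent.
rng = np.random.default_rng(0)
uncond = rng.normal(size=(4, 4, 1))
cond_a = rng.normal(size=(4, 4, 1))
cond_b = rng.normal(size=(4, 4, 1))

# AND(concept A) NOT(concept B): steer toward A, away from B.
combined = compose_noise(uncond, [cond_a, cond_b], weights=[7.5, -3.0])
print(combined.shape)  # (4, 4, 1)
```

In the real repository this combination would be applied at every denoising step inside the diffusion sampler; the sketch only shows the per-step mixing rule.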

485 stars. No commits in the last 6 months.

Use this if you need to generate images or 3D models with specific combinations of features or if you want to explicitly exclude certain elements using natural language prompts.

Not ideal if you prefer to generate visuals with a single, straightforward text prompt without needing fine-grained control over compositional elements.

generative-art 3d-modeling concept-design visual-creation digital-illustration
Status: Stale (6 months) · No Package · No Dependents
Score breakdown:
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 15 / 25


Stars: 485
Forks: 39
Language: Jupyter Notebook
License:
Last pushed: Apr 24, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.