nanlliu/Unsupervised-Compositional-Concepts-Discovery
[ICCV 2023] Unsupervised Compositional Concepts Discovery with Text-to-Image Generative Models
This project helps artists, designers, and researchers understand and reuse visual concepts embedded within image collections. Given a set of diverse images spanning, for example, different art styles, objects, or scene elements, it discovers a set of 'generative concepts' that characterize those images. These concepts can then be recombined to generate new images or used to classify existing ones.
No commits in the last 6 months.
Use this if you need to automatically identify and disentangle recurring visual styles, objects, or scene components from a large collection of images without manual labeling.
Not ideal if you already have labeled data and specific image generation prompts, or if you need to perform traditional supervised image classification.
Stars: 85
Forks: 3
Language: Python
License: —
Category: —
Last pushed: Oct 17, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/nanlliu/Unsupervised-Compositional-Concepts-Discovery"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
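The curl call above can also be made from Python. The sketch below builds the same endpoint URL and fetches it with the standard library; note that the `X-API-Key` header name and the JSON response shape are assumptions for illustration, not documented by the API.

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category: str, owner: str, repo: str) -> str:
    """Assemble the endpoint URL for one repository."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str, api_key: str = "") -> dict:
    """Fetch the quality record as parsed JSON (network required).

    The header name for the optional free key is a guess; check the
    API's own docs for the real authentication scheme.
    """
    req = urllib.request.Request(build_url(category, owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)  # assumed header name
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Reproduces the URL from the curl example.
    print(build_url("diffusion", "nanlliu",
                    "Unsupervised-Compositional-Concepts-Discovery"))
```

Without a key the endpoint allows 100 requests per day, so cache responses rather than re-fetching on every page load.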
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...