adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
This tool helps designers, marketers, and artists create unique images by fine-tuning a text-to-image diffusion model on new objects, styles, or concepts from just a few example pictures. You provide roughly 4-20 images of something new, such as a specific product or an artistic style, and can then generate new images incorporating that concept through text prompts. It's designed for anyone who needs to quickly produce custom visual content featuring specific items or aesthetics.
Use this if you need to generate images of specific, new concepts (like your unique product or a custom art style) that aren't typically found in standard text-to-image AI models.
Not ideal if you primarily need to generate generic images or don't have a specific concept (object, style) in mind that requires custom training.
Stars: 1,971
Forks: 142
Language: Python
License: —
Category: —
Last pushed: Dec 01, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/adobe-research/custom-diffusion"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
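The endpoint appears to follow a simple `/{category}/{owner}/{repo}` path pattern. A minimal sketch of constructing the same URL programmatically; the function name and the role of the `category` segment are assumptions inferred from the single curl example above, not documented API behavior:

```python
# Hypothetical helper: build the pt-edge quality-API URL shown above.
# Only the base URL and the path shape come from the curl example; the
# function name and parameter names are illustrative assumptions.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def build_quality_url(category: str, owner: str, repo: str) -> str:
    """Return the quality endpoint URL for a given repository."""
    return f"{BASE_URL}/{category}/{owner}/{repo}"

# The resulting URL can be passed to curl or any HTTP client.
print(build_quality_url("diffusion", "adobe-research", "custom-diffusion"))
```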
Related models
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...
zai-org/ImageReward
[NeurIPS 2023] ImageReward: Learning and Evaluating Human Preferences for Text-to-image Generation