xie-lab-ml/CoRe2
[TPAMI] The official implementation of our paper "CoRe^2: Collect, Reflect and Refine to Generate Better and Faster".
This project helps graphic designers, digital artists, and marketers create high-quality images from text descriptions more efficiently. You provide a text prompt describing the image you want, and CoRe^2 generates it using diffusion models such as Stable Diffusion XL or Stable Diffusion 3.5. It is well suited to anyone who needs to visualize concepts quickly or produce unique imagery without traditional artistic skills.
Use this if you need to generate detailed and visually appealing images from text prompts and want to enhance the quality and speed of popular image generation models.
Not ideal if you need to train entirely new image generation models from scratch or require fine-grained control over every pixel during the image creation process.
Stars: 31
Forks: —
Language: Python
License: Apache-2.0
Category: —
Last pushed: Mar 08, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/xie-lab-ml/CoRe2"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
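The same data can be fetched from Python. A minimal sketch using only the standard library, assuming the endpoint returns JSON with field names that mirror the card above ("stars", "forks", "language", "license" are assumptions; the actual response schema is not documented here):

```python
import json
from urllib.request import urlopen

# Endpoint shown in the curl example above.
API_URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/xie-lab-ml/CoRe2"


def parse_quality(payload: dict) -> dict:
    """Pull the card fields out of a decoded JSON payload.

    Field names are hypothetical; missing fields fall back to None
    or "unknown" so the caller can render gaps the same way the
    card does ("Forks: —").
    """
    return {
        "stars": payload.get("stars"),
        "forks": payload.get("forks"),
        "language": payload.get("language", "unknown"),
        "license": payload.get("license", "unknown"),
    }


def fetch_quality(url: str = API_URL) -> dict:
    """Call the endpoint (counts against the 100 requests/day quota)."""
    with urlopen(url, timeout=10) as resp:
        return parse_quality(json.load(resp))
```

The parsing step is kept separate from the network call so it can be reused on cached responses without spending quota.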
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...