sihyun-yu/REPA
[ICLR'25 Oral] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think
REPA helps machine learning researchers and practitioners train diffusion transformers for image generation far more efficiently, by aligning the model's internal representations with those of a pretrained self-supervised visual encoder such as DINOv2. Trained on large image datasets like ImageNet or MS-COCO, it produces state-of-the-art diffusion transformer models that generate new, high-quality images or create images from text descriptions. It's designed for those pushing the boundaries of generative AI.
1,582 stars. No commits in the last 6 months.
Use this if you need to train image generation models faster, and to higher quality, than standard diffusion transformer training allows, particularly for large-scale image datasets or text-to-image tasks.
Not ideal if you are looking for an out-of-the-box tool for casual image editing, or if you have no experience training deep learning models.
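At its core, REPA adds a representation-alignment term to the standard denoising objective: intermediate features of the diffusion transformer are projected through a small MLP and pushed toward the frozen teacher encoder's patch features. Below is a minimal sketch of that idea in PyTorch; it is not the repository's actual code, and the tensor shapes, projection head, and lambda_repa weight are illustrative assumptions.

import torch
import torch.nn.functional as F

def repa_alignment_loss(hidden_states, teacher_features, proj_head):
    # hidden_states:    (B, N, D_model) tokens from an intermediate transformer layer
    # teacher_features: (B, N, D_teacher) patch features from a frozen encoder (e.g., DINOv2)
    # proj_head:        small MLP mapping D_model -> D_teacher
    projected = proj_head(hidden_states)
    # Negative mean cosine similarity over patch tokens: better alignment, lower loss.
    return -F.cosine_similarity(projected, teacher_features, dim=-1).mean()

# Sketch of the combined objective (lambda_repa is a tuning weight, assumed here):
# total_loss = denoising_loss + lambda_repa * repa_alignment_loss(h, y_teacher, proj_head)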
Stars: 1,582
Forks: 81
Language: Python
License: MIT
Category: Diffusion
Last pushed: Mar 16, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/sihyun-yu/REPA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
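The same data can be fetched from Python using only the standard library. A minimal sketch, assuming the endpoint returns a JSON body (the response schema is not documented here, so the script simply prints whatever fields come back):

import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/sihyun-yu/REPA"

with urllib.request.urlopen(URL) as resp:  # anonymous access: 100 requests/day
    data = json.load(resp)                 # assumes a JSON response body

print(json.dumps(data, indent=2))          # inspect the returned fields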
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...