sihyun-yu/REPA

[ICLR'25 Oral] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think

Quality score: 42 / 100 (Emerging)

This project helps machine learning researchers and practitioners efficiently train powerful image generation models. It takes large datasets of images, like ImageNet or MS-COCO, and produces state-of-the-art diffusion transformer models that can generate new, high-quality images or even create images from text descriptions. It's designed for those pushing the boundaries of generative AI.

1,582 stars. No commits in the last 6 months.

Use this if you need to train image generation models faster and achieve higher quality results than traditional methods, particularly for large-scale image datasets or text-to-image tasks.

Not ideal if you are looking for an out-of-the-box solution for casual image editing or don't have experience with deep learning model training.

generative-ai image-synthesis deep-learning-research model-training diffusion-models
Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 16 / 25


Stars: 1,582
Forks: 81
Language: Python
License: MIT
Last pushed: Mar 16, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/sihyun-yu/REPA"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
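
A minimal sketch of fetching the same data from Python with the requests library, based only on the endpoint shown in the curl example above. The response is assumed to be JSON; no field names are assumed because the schema isn't documented in this summary.

import requests

# Same endpoint as the curl example; up to 100 requests/day without a key.
URL = "https://pt-edge.onrender.com/api/v1/quality/diffusion/sihyun-yu/REPA"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()

# Assumed to return JSON; print whatever comes back, since the exact
# response schema isn't documented here.
print(resp.json())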