mihirp1998/AlignProp

AlignProp uses direct reward backpropagation to align large-scale text-to-image diffusion models. The method is 25x more sample- and compute-efficient than reinforcement learning methods (PPO) for fine-tuning Stable Diffusion.

Score: 35 / 100 (Emerging)

This project helps AI developers fine-tune large text-to-image models like Stable Diffusion to produce images that better align with specific goals, such as improved aesthetic quality, semantic accuracy, or object controllability. Developers input an existing text-to-image model and a reward function, and the output is a more finely tuned model that generates images optimized for the desired criteria. It's designed for machine learning engineers and researchers working on generative AI applications.
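The core idea, backpropagating a differentiable reward directly through the generation process instead of estimating gradients with a policy-gradient method like PPO, can be sketched with a toy example. Everything below is illustrative (a one-parameter "generator" and a quadratic "reward"); it is not AlignProp's actual code, which differentiates through the full diffusion sampling chain.

```python
# Toy sketch of direct reward backpropagation (illustrative only).
# The "generator" maps a parameter to an "image" (a single float here);
# because the reward is differentiable, its gradient flows straight back
# into the parameter via the chain rule.

def generate(theta, noise):
    # Stand-in for the diffusion sampling chain: differentiable in theta.
    return theta * 2.0 + noise

def reward(image, target=4.0):
    # Differentiable reward: negative squared distance to a target value.
    return -(image - target) ** 2

def d_reward_d_theta(theta, noise, target=4.0):
    # Chain rule: dR/dtheta = dR/dimage * dimage/dtheta
    return -2.0 * (generate(theta, noise) - target) * 2.0

theta = 0.0
for _ in range(200):
    theta += 0.05 * d_reward_d_theta(theta, noise=0.0)

# theta converges toward 2.0, so generate(theta, 0.0) approaches the target 4.0
```

In the real setting, `generate` is the full denoising loop of Stable Diffusion and `reward` is a learned scorer (e.g., an aesthetic model); the efficiency gain over PPO comes from using exact reward gradients rather than sampled return estimates.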

314 stars. No commits in the last 6 months.

Use this if you are a machine learning engineer who needs to efficiently improve the quality and control of images generated by large text-to-image diffusion models for specific downstream tasks.

Not ideal if you are an end-user simply looking to generate images without needing to train or fine-tune models yourself.

AI-model-finetuning generative-AI image-synthesis diffusion-models machine-learning-engineering
Signals: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 9 / 25


Stars: 314
Forks: 11
Language: Python
License: MIT
Last pushed: Nov 01, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/mihirp1998/AlignProp"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
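From Python, the same endpoint can be called with the standard library. This is a minimal sketch: it assumes the endpoint returns JSON, since the response schema is not documented here, and the `quality_url` helper is our own naming, not part of any published client.

```python
# Minimal sketch of calling the quality API with the standard library.
# Assumes a JSON response; the exact schema is not documented here.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the endpoint URL for a category and GitHub owner/repo pair.
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # Perform the request; the free tier allows 100 requests/day without a key.
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Example (performs a network call, so it is left commented out):
# data = fetch_quality("diffusion", "mihirp1998", "AlignProp")
print(quality_url("diffusion", "mihirp1998", "AlignProp"))
```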