gohyojun15/ANT_diffusion

[NeurIPS 2023] Official PyTorch implementation of "Addressing Negative Transfer in Diffusion Models"

Overall score: 26 / 100 (Experimental)

This project helps researchers and developers who train diffusion-based image generation models. It addresses negative transfer: the phenomenon where jointly training on multiple related denoising tasks can make a model perform worse than expected. By providing methods to manage how the different training signals are weighted, it helps image generation models learn more effectively and produce higher-quality samples. You provide existing image datasets and configurations, and the output is a more robust, finely tuned image generation model.
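The weighting idea can be sketched abstractly. The snippet below is a generic illustration of combining per-group training losses with non-negative weights; the group names and weight values are hypothetical and are not the repository's actual method:

```python
# Hypothetical sketch: combine losses from several timestep groups into
# one training objective using a normalized weighted average.
# Group names and weights are illustrative, not from the repository.
def weighted_loss(group_losses, weights):
    """Combine per-group losses into a single scalar objective.

    group_losses: dict mapping group name -> scalar loss value
    weights: dict mapping group name -> non-negative weight
    """
    total_weight = sum(weights[g] for g in group_losses)
    return sum(weights[g] * group_losses[g] for g in group_losses) / total_weight

# Example: three timestep groups with uneven losses; the "mid" group
# is weighted twice as heavily as the others.
losses = {"early": 0.9, "mid": 0.4, "late": 0.1}
weights = {"early": 1.0, "mid": 2.0, "late": 1.0}
print(weighted_loss(losses, weights))  # (0.9 + 0.8 + 0.1) / 4.0 = 0.45
```

In a real training loop the weights would typically be adapted during training rather than fixed; see the repository itself for the method used in the paper.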

No commits in the last 6 months.

Use this if you are training sophisticated image generation models (like diffusion models) and want to improve their performance and stability, especially when working with complex datasets or fine-tuning for specific tasks.

Not ideal if you are looking for an off-the-shelf tool to simply generate images without getting into the technical details of model training and optimization.

Tags: AI model training, image generation, deep learning, research, computer vision, generative AI

Status: Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 4 / 25


Stars: 23
Forks: 1
Language: Python
License: MIT
Last pushed: Jul 04, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/gohyojun15/ANT_diffusion"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
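The same request can be made from Python. The sketch below only builds the URL shown above and leaves the actual fetch commented out, since the response schema is not documented here and the field names would be an assumption:

```python
# Sketch of calling the quality API from Python. Only the URL pattern is
# taken from the curl example above; the response format is not assumed.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem, owner, repo):
    """Build the quality-endpoint URL for a given repository."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

url = quality_url("diffusion", "gohyojun15", "ANT_diffusion")

# Uncomment to fetch live data (network access required):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```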