gohyojun15/ANT_diffusion
[NeurIPS 2023] Official PyTorch implementation of "Addressing Negative Transfer in Diffusion Models"
This project targets researchers and developers training diffusion models for image generation. It addresses negative transfer: diffusion training amounts to multi-task learning over denoising tasks at different noise levels, and training all of them jointly can make the model perform worse on some tasks than expected. By providing methods to manage how these different training signals are weighted, the project helps models learn more effectively and produce higher-quality images. You supply existing image datasets and configurations; the output is a more robust, finely tuned image generation model.
No commits in the last 6 months.
Use this if you are training sophisticated image generation models (like diffusion models) and want to improve their performance and stability, especially when working with complex datasets or fine-tuning for specific tasks.
Not ideal if you are looking for an off-the-shelf tool to simply generate images without getting into the technical details of model training and optimization.
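To make the "weighting of training signals" concrete, here is a minimal sketch of per-timestep-interval loss weighting for diffusion training, the general idea of treating different noise levels as separate tasks and reweighting their losses. The interval grouping, weights, and function names are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

NUM_TIMESTEPS = 1000
NUM_INTERVALS = 5  # cluster timesteps into coarse noise-level groups

def interval_of(t: np.ndarray) -> np.ndarray:
    """Map each timestep to its noise-level interval index (hypothetical grouping)."""
    return np.minimum(t * NUM_INTERVALS // NUM_TIMESTEPS, NUM_INTERVALS - 1)

def weighted_denoising_loss(per_sample_loss: np.ndarray,
                            timesteps: np.ndarray,
                            interval_weights: np.ndarray) -> float:
    """Reweight each sample's denoising loss by the weight of its timestep interval."""
    w = interval_weights[interval_of(timesteps)]
    return float(np.mean(w * per_sample_loss))

rng = np.random.default_rng(0)
losses = rng.random(8)                          # stand-in for per-sample MSE losses
ts = rng.integers(0, NUM_TIMESTEPS, size=8)     # sampled diffusion timesteps
weights = np.ones(NUM_INTERVALS)                # uniform weights = standard training
print(weighted_denoising_loss(losses, ts, weights))
```

In practice the interval weights would be adapted during training (e.g., by a multi-task learning method) rather than fixed; uniform weights recover the standard diffusion loss.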
Stars: 23
Forks: 1
Language: Python
License: MIT
Category:
Last pushed: Jul 04, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/gohyojun15/ANT_diffusion"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
xie-lab-ml/Golden-Noise-for-Diffusion-Models
[ICCV2025] The code of our work "Golden Noise for Diffusion Models: A Learning Framework".
yulewang97/ERDiff
[NeurIPS 2023 Spotlight] Official Repo for "Extraction and Recovery of Spatio-temporal Structure...
UNIC-Lab/RadioDiff
This is the code for the paper "RadioDiff: An Effective Generative Diffusion Model for...
pantheon5100/pid_diffusion
This repository is the official implementation of the paper: Physics Informed Distillation for...
zju-pi/diff-sampler
An open-source toolbox for fast sampling of diffusion models. Official implementations of our...