giannisdaras/ambient-omni
[NeurIPS 2025, Spotlight]: Ambient-o: Training Good Models with Bad Data.
This project trains higher-quality, more diverse image generation models without requiring perfectly curated datasets. It leverages existing image collections, even those containing low-quality or out-of-distribution content, to train more robust generative models. Artists, content creators, and visual-AI researchers can use it to produce high-quality images from text descriptions or as a component of larger image synthesis workflows.
Use this if you need to train image generation models efficiently on readily available, imperfect datasets, or if you want to increase the diversity and quality of generated images without complex data cleaning.
It is less suitable if your primary goal is pixel-perfect accuracy in highly specialized applications where data quality is already strictly controlled.
Stars: 31
Forks: —
Language: Python
License: —
Category: —
Last pushed: Jan 21, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/giannisdaras/ambient-omni"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
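The same endpoint can be called from Python's standard library instead of curl. This is a minimal sketch: the response is assumed to be JSON, and the helper names (`quality_url`, `fetch_quality`) are illustrative, not part of any documented client.

```python
# Minimal sketch of calling the quality endpoint with Python's standard
# library. The JSON response shape is an assumption; the API itself is
# documented only by the curl example above.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/generative-ai"


def quality_url(owner: str, repo: str) -> str:
    """Build the endpoint URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("giannisdaras", "ambient-omni"))
```

With a free API key (1,000 requests/day), the key would presumably be attached to the request, but the exact mechanism (header vs. query parameter) is not specified here.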
Higher-rated alternatives
soran-ghaderi/torchebm
🍓 Build and train energy-based and diffusion models in PyTorch ⚡.
spqb/adabmDCApy
PyTorch implementation of adabmDCA
opendilab/GenerativeRL
Python library for solving reinforcement learning (RL) problems using generative models (e.g....
AstraZeneca/DiffAbXL
The official implementation of DiffAbXL benchmarked in the paper "Exploring Log-Likelihood...
G-U-N/UniRL
a unified reinforcement learning toolbox for joint RL on language models and diffusion models