bghira/SimpleTuner
A general fine-tuning kit geared toward image/video/audio diffusion models.
This tool helps you customize and improve existing AI models that generate images, videos, and audio. You supply your own data (images, video clips, or sound files), and the tool fine-tunes a chosen base model to generate content aligned with your needs. Creative professionals, researchers, and content creators looking to personalize generative AI models would find this useful.
2,782 stars. Actively maintained with 64 commits in the last 30 days. Available on PyPI.
Use this if you need to adapt powerful image, video, or audio generation models to produce highly specific content for your projects, whether you are working with limited GPU resources or very large datasets.
Not ideal if you are looking to build a generative AI model from scratch rather than fine-tuning an existing one.
Stars: 2,782
Forks: 275
Language: Python
License: AGPL-3.0
Category:
Last pushed: Mar 12, 2026
Commits (30d): 64
Dependencies: 56
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/bghira/SimpleTuner"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
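The curl call above can also be scripted. A minimal Python sketch using only the standard library; the endpoint URL comes from the example above, but the JSON response schema is not documented here, so the code only fetches and parses whatever the API returns:

```python
import json
import urllib.request

# Base URL taken from the documented curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def build_url(owner: str, repo: str) -> str:
    """Construct the quality-API URL for a given GitHub repository."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch quality data; anonymous access is limited to 100 requests/day."""
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Prints the raw response for bghira/SimpleTuner (schema undocumented).
    print(fetch_quality("bghira", "SimpleTuner"))
```

Authenticated access (the 1,000 requests/day tier) presumably requires passing the key with each request, but the header or query-parameter name is not stated here, so it is omitted.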
Related models
huggingface/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
mcmonkeyprojects/SwarmUI
SwarmUI (formerly StableSwarmUI), A Modular Stable Diffusion Web-User-Interface, with an...
nateraw/stable-diffusion-videos
Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
TheDesignFounder/DreamLayer
Benchmark diffusion models faster. Automate evals, seeds, and metrics for reproducible results.
AUTOMATIC1111/stable-diffusion-webui
Stable Diffusion web UI