faverogian/controlNet
An implementation of ControlNet as described in "Adding Conditional Control to Text-to-Image Diffusion Models" by Zhang et al.
This project helps machine learning engineers and researchers fine-tune existing text-to-image diffusion models. You supply a pretrained diffusion model and a dataset of images paired with a conditioning signal (such as Canny edge maps), and training produces a new model that generates images following that signal in addition to the text prompt. It is aimed at users who want to add spatial or structural control to their image-generation workflows.
No commits in the last 6 months.
Use this if you need to train a diffusion model to generate images that adhere to specific structural inputs, like outlines or pose skeletons, beyond just text prompts.
Not ideal if you are looking for an end-user application to generate images directly, or if you don't have experience training deep learning models.
Stars: 17
Forks: 3
Language: Python
License: MIT
Category:
Last pushed: Feb 05, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/faverogian/controlNet"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
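The same endpoint can be called from Python with only the standard library. The response schema is not documented on this page, so this sketch just decodes whatever JSON comes back; the helper names are mine:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(owner: str, repo: str, category: str = "diffusion") -> str:
    """Build the endpoint URL for a repository's quality record."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON record (anonymous tier: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(json.dumps(fetch_quality("faverogian", "controlNet"), indent=2))
```

With an API key (1,000 requests/day), you would presumably pass it as a header; the exact header name is not given here, so it is omitted.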
Higher-rated alternatives
scraed/LanPaint
High quality training free inpaint for every stable diffusion model. Supports ComfyUI
julienkay/com.doji.diffusers
A Unity package to run pretrained diffusion models with Unity Sentis
apapiu/transformer_latent_diffusion
Text to Image Latent Diffusion using a Transformer core
Aatricks/LightDiffusion-Next
Fastest Diffusion backend, WebUI, server. Pushing implementation and discovery of optimizations...
FMXExpress/Stable-Diffusion-Desktop-Client
Stable Diffusion Desktop client for Windows, macOS, and Linux built in Embarcadero Delphi.