faverogian/controlNet

An implementation of ControlNet as described in "Adding Conditional Control to Text-to-Image Diffusion Models" by Zhang et al.

Score: 35 / 100 (Emerging)

This project helps machine learning engineers and researchers fine-tune existing text-to-image diffusion models. You provide an existing diffusion model and a dataset of images paired with a controlling condition (like Canny edge maps), and it produces a new model capable of generating images that precisely follow that condition. This is for users who want to add specific spatial or structural control to their AI image generation workflows.
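To illustrate the kind of training data such a pipeline expects, here is a minimal sketch of pairing images with conditioning edge maps. The `edge_map` helper below is a simplified gradient-magnitude detector standing in for a real Canny implementation (e.g. `cv2.Canny`), which actual ControlNet training pipelines typically use; the function name and threshold are illustrative, not part of this repository's API.

```python
import numpy as np

def edge_map(image, threshold=0.2):
    """Simplified gradient-magnitude edge detector -- a stand-in for a
    proper Canny detector such as cv2.Canny. `image` is a 2-D float
    array in [0, 1]; returns a binary (0/1) conditioning map."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    return (magnitude > threshold).astype(np.uint8)

# Pair each training image with its conditioning map; ControlNet-style
# fine-tuning consumes (image, condition, caption) triples of this shape.
rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(4)]
pairs = [(img, edge_map(img)) for img in images]
```

In a real dataset the conditions would be precomputed from the ground-truth images (edges, depth, pose skeletons) so the fine-tuned model learns to reproduce an image consistent with the given structure.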

No commits in the last 6 months.

Use this if you need to train a diffusion model to generate images that adhere to specific structural inputs, like outlines or pose skeletons, beyond just text prompts.

Not ideal if you are looking for an end-user application to generate images directly, or if you don't have experience training deep learning models.

AI-image-generation deep-learning-research model-fine-tuning computer-vision generative-AI
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 13 / 25

How are scores calculated?

Stars: 17
Forks: 3
Language: Python
License: MIT
Last pushed: Feb 05, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/faverogian/controlNet"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.