ZouShilong1024/CycleDiff

Code for TIP2026 paper: CycleDiff: Cycle Diffusion Models for Unpaired Image-to-image Translation

Quality score: 38 / 100 (Emerging)

This project helps researchers and artists transform images from one category to another without needing matching pairs of images. For example, it can turn a photo of a cat into a dog, or a daytime scene into a nighttime one. It takes a collection of source images and a collection of target images and learns to translate between them, outputting new images that look like they belong to the target category. This is useful for anyone working with image generation, style transfer, or synthetic data creation.

Use this if you need to translate images between two different visual domains where you don't have perfectly matched examples for training.

Not ideal if you need to perform precise pixel-level edits or require an exact one-to-one mapping between input and output images.
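The "cycle" in the name refers to cycle-consistency training: translate a source image to the target domain and back, then penalize the round-trip reconstruction error. A toy sketch of that idea only, with placeholder functions standing in for the paper's diffusion models (`G` and `F` here are illustrative stand-ins, not CycleDiff's actual code):

```python
import numpy as np

# Toy stand-ins for the two translators: G maps domain A -> B, F maps B -> A.
# These are NOT the paper's models, just placeholders to illustrate the loss.
def G(x):  # e.g. cat -> dog
    return x * 2.0

def F(y):  # e.g. dog -> cat
    return y / 2.0

def cycle_consistency_loss(x):
    # L1 distance between an image and its round-trip reconstruction F(G(x)).
    return float(np.mean(np.abs(F(G(x)) - x)))

x = np.random.rand(4, 4)  # a tiny stand-in "image"
print(cycle_consistency_loss(x))  # 0.0 here, because F exactly inverts G
```

In real training the two translators are learned networks that only approximately invert each other, so this loss is nonzero and drives them toward consistent mappings without needing paired examples.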

image-generation style-transfer computer-vision digital-art synthetic-data
No Package · No Dependents

Maintenance: 10 / 25
Adoption: 9 / 25
Maturity: 15 / 25
Community: 4 / 25


Stars: 77
Forks: 2
Language: Python
License: MIT
Last pushed: Feb 06, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/ZouShilong1024/CycleDiff"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
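The same endpoint can be queried from Python instead of curl. A minimal sketch: the URL pattern is taken from the curl example above, but the response field names (`maintenance`, `adoption`, `maturity`, `community`) are an assumption based on the sub-scores shown on this page, not a documented schema, so a sample payload is parsed here rather than a live response:

```python
import json

# URL pattern from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Assemble the per-repository quality endpoint.
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("diffusion", "ZouShilong1024", "CycleDiff")
print(url)

# Hypothetical payload; field names are assumptions, values match this page.
sample = json.loads(
    '{"maintenance": 10, "adoption": 9, "maturity": 15, "community": 4}'
)
total = sum(sample.values())
print(total)  # the four sub-scores sum to the 38/100 overall score
```

To fetch live data, pass the URL to any HTTP client (e.g. `urllib.request.urlopen`) and decode the JSON body; within the free tier no API key header is needed.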