WindVChen/DiffAttack

An unrestricted attack based on diffusion models that can achieve both good transferability and imperceptibility.

Quality score: 44 / 100 (Emerging)

This tool helps security researchers and AI developers create adversarial examples: subtly altered images that fool AI models into misclassifying objects while remaining imperceptible to humans. You provide an image, and it outputs a slightly modified version that can bypass image recognition systems without raising human suspicion. It is designed for testing the robustness and vulnerabilities of AI vision systems.
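To make the idea concrete, here is a minimal FGSM-style sketch on a toy linear classifier. This is NOT DiffAttack's diffusion-based method (which perturbs the latent space of a diffusion model); it is only an illustration of what an adversarial example is: a small per-pixel perturbation, bounded by epsilon, that pushes a model's score toward the wrong decision.

```python
import numpy as np

# Illustrative only -- not DiffAttack's algorithm. Shows the core idea of
# an adversarial example: a bounded perturbation that degrades a model's
# confidence while changing each pixel by at most eps.

rng = np.random.default_rng(0)

def score(x, w, b=0.0):
    # Linear decision score: positive -> class 1, negative -> class 0.
    return float(x @ w + b)

# Toy flattened "image", aligned with w so it is confidently class 1.
w = rng.normal(size=16)
x = 0.1 * w

# FGSM step against class 1: for a linear score x @ w, the gradient
# with respect to x is w, so move each pixel by -eps * sign(w).
eps = 0.2
x_adv = x - eps * np.sign(w)

print(f"clean score: {score(x, w):+.3f}")
print(f"adv score:   {score(x_adv, w):+.3f}")
print(f"max pixel change: {np.max(np.abs(x_adv - x)):.3f}")
```

The adversarial score is guaranteed to be lower than the clean score here, while no pixel moves by more than `eps`; DiffAttack pursues the same goal but hides the perturbation in a diffusion model's latent space for better imperceptibility and transferability.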


Use this if you need to generate highly inconspicuous, yet effective, adversarial images to test the resilience of various image recognition AI models, including those with defensive measures.

Not ideal if your goal is to understand how diffusion models work or to generate creative, aesthetically pleasing images, as this tool is specifically for adversarial attacks.

Tags: AI-security, computer-vision, model-robustness, adversarial-machine-learning, image-recognition-auditing
No package published · No dependents
Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 12 / 25


Stars: 259
Forks: 18
Language: Python
License: Apache-2.0
Last pushed: Nov 23, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/WindVChen/DiffAttack"

Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.
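The same endpoint can be called from Python using only the standard library. This is a sketch based on the curl example above; the shape of the JSON response is not documented here, so inspect the actual payload before relying on any particular key.

```python
import json
from urllib.request import urlopen

# Sketch of calling the quality endpoint from Python instead of curl.
# The URL pattern mirrors the curl example:
#   /api/v1/quality/<ecosystem>/<owner>/<repo>

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    return f"https://pt-edge.onrender.com/api/v1/quality/{ecosystem}/{owner}/{repo}"

url = quality_url("diffusion", "WindVChen", "DiffAttack")
print(url)

# Uncomment to fetch live data (no API key needed, 100 requests/day):
# with urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```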