WindVChen/DiffAttack
An unrestricted attack based on diffusion models that can achieve both good transferability and imperceptibility.
This tool helps security researchers and AI developers craft adversarial attacks: subtly altered images that cause vision models to misclassify objects while remaining imperceptible to humans. You provide an image, and it outputs a modified version that can evade image recognition systems without raising human suspicion. It is designed for testing the robustness and vulnerabilities of AI vision systems.
Use this if you need to generate highly inconspicuous yet effective adversarial images to test the resilience of image recognition models, including those with defensive measures.
Not ideal if your goal is to understand how diffusion models work or to generate creative, aesthetically pleasing images; this tool is built specifically for adversarial attacks.
Stars: 259
Forks: 18
Language: Python
License: Apache-2.0
Category:
Last pushed: Nov 23, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/WindVChen/DiffAttack"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
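The curl command above can also be scripted. The sketch below builds the endpoint URL for any owner/repo pair and fetches the JSON payload with the standard library; the response schema is an assumption (this listing does not document it), so inspect the actual JSON before relying on specific field names.

```python
# Minimal sketch of calling the quality-data API shown above.
# Endpoint URL taken from the listing; response fields are NOT documented
# here, so the decoded dict's keys are an assumption to verify manually.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch and decode the JSON payload for one repository."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__":
    # Prints the same URL as the curl example above.
    print(quality_url("WindVChen", "DiffAttack"))
```

Without an API key this is limited to 100 requests per day, so cache responses rather than re-fetching in a loop.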
Higher-rated alternatives
OPTML-Group/Unlearn-Saliency
[ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in...
Shilin-LU/VINE
[ICLR 2025] "Robust Watermarking Using Generative Priors Against Image Editing: From...
koninik/DiffusionPen
Official PyTorch Implementation of "DiffusionPen: Towards Controlling the Style of Handwritten...
Wuyxin/DISC
(ICML 2023) Discover and Cure: Concept-aware Mitigation of Spurious Correlation
bytedance/LatentUnfold
Implementation of paper: Flux Already Knows – Activating Subject-Driven Image Generation without Training