ChenWu98/agent-attack

[ICLR 2025] Dissecting adversarial robustness of multimodal language model agents

Quality score: 36 / 100 (Emerging)

This project helps developers evaluate the security and reliability of multimodal AI agents, especially those that interact with web environments. Given a multimodal agent and web-based task data, it applies crafted adversarial image attacks and measures how easily the agent can be misled into failing its tasks. Researchers and developers building robust AI systems for web automation or general multimodal interaction can use it to stress-test their agents.
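To make the attack class concrete, below is a minimal PGD-style sketch of the kind of bounded image perturbation such an evaluation applies. It is an illustration only, not the repository's API: the model, target, and tensor shapes are placeholder assumptions, with a PyTorch image classifier standing in for the agent's vision backbone.

import torch
import torch.nn.functional as F

def pgd_targeted(model, image, target, eps=8/255, alpha=2/255, steps=20):
    """Craft an L-infinity-bounded perturbation of `image` that pushes
    `model` toward the attacker-chosen `target` label. Illustrative only:
    the actual benchmark attacks VLM-based web agents, not a classifier."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), target)
        grad = torch.autograd.grad(loss, adv)[0]
        # Targeted attack: descend the loss toward the target label.
        adv = adv.detach() - alpha * grad.sign()
        # Project back into the eps-ball and the valid pixel range [0, 1].
        adv = image + torch.clamp(adv - image, -eps, eps)
        adv = adv.clamp(0.0, 1.0)
    return adv.detach()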

132 stars. No commits in the last 6 months.

Use this if you are a developer or researcher building and evaluating multimodal AI agents and need to understand their vulnerabilities to visual adversarial attacks in web-based scenarios.

Not ideal if you are an end-user of an AI agent and are not involved in its development, testing, or security analysis.

Tags: AI-security, multimodal-AI, agent-development, web-automation, AI-robustness
Flags: Stale (6 months), No package, No dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 132
Forks: 9
Language: Python
License: MIT
Last pushed: Feb 19, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ChenWu98/agent-attack"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
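To consume the same endpoint from code, here is a minimal Python sketch using the requests library. The response schema is not documented on this page, so the sketch simply fetches and prints the raw JSON; the authentication mechanism for keyed access is left out rather than guessed at.

import requests

# Quality-score endpoint for this repository (same URL as the curl example).
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ChenWu98/agent-attack"

resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface 4xx/5xx errors (e.g., hitting the daily rate limit)
print(resp.json())       # schema not documented here, so just print the raw response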