hupe1980/aisploit

🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.

Score: 47/100 (Emerging)

This tool helps red teams and penetration testers identify and exploit vulnerabilities in large language model (LLM) systems. It runs automated security-testing scenarios against LLM-based solutions, probes for weaknesses, and reports potential exploits. Security professionals assessing AI solutions will find it useful in their testing workflows.

No commits in the last 6 months. Available on PyPI.

Use this if you need to systematically test the security of AI models, specifically large language models, to uncover vulnerabilities.

Not ideal if you are looking for a general-purpose security testing tool that is not specifically focused on AI systems or large language models.

Tags: red-teaming, penetration-testing, AI-security, vulnerability-assessment, LLM-exploitation
Status: Stale (no commits in 6 months)
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 25 / 25
Community: 15 / 25


Stars: 26
Forks: 5
Language: Python
License: MIT
Last pushed: May 16, 2024
Commits (30d): 0
Dependencies: 26

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/hupe1980/aisploit"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
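The same endpoint can also be queried from Python. A minimal sketch using only the standard library; the JSON response shape is an assumption, so inspect the actual payload before relying on specific field names:

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-report URL for a GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (no API key needed up to 100 requests/day)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Hypothetical usage: pretty-print the report for this repository.
    report = fetch_quality("hupe1980", "aisploit")
    print(json.dumps(report, indent=2))
```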