hupe1980/aisploit
🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.
This tool helps red teams and penetration testers identify and exploit vulnerabilities in large language model (LLM) AI systems. It takes in various security testing scenarios and provides automated utilities to find weaknesses, ultimately outputting reports on potential exploits. Security professionals focused on AI solution assessment will find this useful for their testing workflows.
No commits in the last 6 months. Available on PyPI.
Use this if you need to systematically test the security of AI models, specifically large language models, to uncover vulnerabilities.
Not ideal if you are looking for a general-purpose security testing tool that is not specifically focused on AI systems or large language models.
Stars
26
Forks
5
Language
Python
License
MIT
Category
Last pushed
May 16, 2024
Commits (30d)
0
Dependencies
26
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/hupe1980/aisploit"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Compare
Higher-rated alternatives
format81/TI-Mindmap-GPT
AI-powered tool designed to help producing Threat Intelligence Mindmap.
bobby-tablez/TTP-Threat-Feeds
Threat feeds designed to extract adversarial TTPs and IOCs, using: ✨AI✨
KryptSec/oasis
Open-source AI security benchmarking CLI. Measure how AI models perform offensive security tasks...
ethiack/ai4eh
AI for Ethical Hacking - Workshop
amazon-science/Cyber-Zero
Cyber-Zero: Training Cybersecurity Agents Without Runtime