M507/HackMeGPT
Vulnerable LLM Application
This tool provides a safe, interactive environment to understand how large language models (LLMs) can be exploited if not properly secured. You input various prompts and observe how the LLM responds, specifically looking for vulnerabilities like data leakage or unexpected behaviors. It's designed for cybersecurity professionals, penetration testers, and developers who need to evaluate the security posture of AI applications.
No commits in the last 6 months.
Use this if you need hands-on experience identifying security flaws in LLM-powered applications.
Not ideal if you are looking for a general-purpose AI assistant or a secure LLM application to use in production.
Stars
14
Forks
4
Language
Python
License
MIT
Category
Last pushed
Jan 01, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/M507/HackMeGPT"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...