pdparchitect/llm-hacking-database
This repository contains various attacks against Large Language Models.
This database helps security researchers and penetration testers understand and replicate attacks against large language models (LLMs). It catalogs 'jailbreaking' techniques and provides concrete examples of how to execute them, revealing vulnerabilities in LLM-powered applications. It is aimed at security analysts and red teamers responsible for evaluating the safety of AI systems.
132 stars. No commits in the last 6 months.
Use this if you need to identify and demonstrate security flaws or unintended behaviors in AI chatbots and LLMs.
Not ideal if you are looking for defensive programming strategies or code to directly implement LLM security patches.
Stars
132
Forks
11
Language
—
License
—
Category
Last pushed
May 21, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/pdparchitect/llm-hacking-database"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
OWASP/www-project-top-10-for-large-language-model-applications
OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)
esbmc/esbmc-ai
Automated Code Repair suite powered by ESBMC and LLMs.
cla7aye15I4nd/PatchAgent
[USENIX Security 25] PatchAgent is an LLM-based practical program repair agent that mimics human...
iSEngLab/AwesomeLLM4APR
[TOSEM 2026] A Systematic Literature Review on Large Language Models for Automated Program Repair
YerbaPage/MGDebugger
Multi-Granularity LLM Debugger [ICSE2026]