forcesunseen/llm-hackers-handbook
A guide to LLM hacking: fundamentals, prompt injection, offense, and defense
This handbook helps you understand and defend against common weaknesses in large language models (LLMs). It takes complex technical vulnerabilities and explains them with practical examples, showing you how malicious actors might try to exploit your AI systems. It's for anyone building, deploying, or securing applications powered by LLMs who needs to understand potential risks.
188 stars. No commits in the last 6 months.
Use this if you are developing or managing AI applications and need to proactively identify and mitigate security risks specific to large language models.
Not ideal if you are looking for a general guide to developing LLM applications or a deep academic dive into theoretical AI security.
Stars: 188
Forks: 24
Language: —
License: —
Category: —
Last pushed: Apr 14, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/forcesunseen/llm-hackers-handbook"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
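The curl command above can also be scripted. Below is a minimal Python sketch that builds the per-repository endpoint URL and summarizes a response; the endpoint path is taken from the curl example, but the JSON field names (`stars`, `last_pushed`) are assumptions, not a documented schema — check the actual response before relying on them.

```python
# Sketch of building and parsing a call to the quality API above.
# Only the base URL is taken from the source; the payload field
# names used in summarize() are hypothetical.

BASE = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def summarize(payload: dict) -> str:
    """One-line summary from a (hypothetical) response payload."""
    stars = payload.get("stars", "?")
    pushed = payload.get("last_pushed", "?")
    return f"{stars} stars, last pushed {pushed}"
```

To fetch live data, pass `quality_url("forcesunseen", "llm-hackers-handbook")` to `urllib.request.urlopen` (or any HTTP client), staying within the 100-requests/day unauthenticated limit.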
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...