Zierax/Basic-ML-prompt-injections
llm attacks basic payloads
This project helps security professionals and developers understand how attackers can manipulate large language models (LLMs) using various prompt injection techniques. It takes examples of crafted prompts as input and demonstrates how they can bypass security or extract information. The output shows the potential unauthorized access or altered model behavior, which is critical for building robust defenses.
No commits in the last 6 months.
Use this if you are responsible for securing LLM-powered applications and need to understand common attack vectors to protect against them.
Not ideal if you are looking for an automated penetration testing tool or a comprehensive security solution.
Stars
9
Forks
3
Language
—
License
MIT
Category
Last pushed
Apr 15, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Zierax/Basic-ML-prompt-injections"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ethz-spylab/agentdojo
A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.
guardrails-ai/guardrails
Adding guardrails to large language models.
JasonLovesDoggo/caddy-defender
Caddy module to block or manipulate requests originating from AIs or cloud services trying to...
deadbits/vigil-llm
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language...
inkdust2021/VibeGuard
Uses just 1% memory while protecting 99% of your personal privacy.