AdityaBhatt3010/Hacking-Lakera-Gandalf-AI-via-Prompt-Injection

A step-by-step walkthrough of the Lakera Gandalf AI challenge, showcasing real-world prompt injection techniques and LLM security insights.

Score: 27 / 100 (Experimental)

This project provides a step-by-step guide to exploiting AI systems through prompt injection. It shows how to craft text inputs that trick a Large Language Model (LLM) into revealing secret information it was designed to protect. It is aimed at anyone building or securing AI applications who wants to understand and prevent such attacks.
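To make the technique concrete, here is a minimal Python sketch of one Gandalf-style bypass: a guard that blocks the literal secret in the model's output is defeated by asking for a transformed version of it. The secret, the guard, and the simulated replies below are hypothetical illustrations, not Lakera's actual defenses, and no real LLM is called.

# Minimal sketch of a prompt-injection bypass against a naive output
# filter. SECRET and both replies are hypothetical stand-ins.

SECRET = "COCOLOCO"  # hypothetical password, for illustration only

def naive_output_guard(reply: str) -> str:
    # Naive defense: refuse any reply containing the literal secret.
    if SECRET in reply:
        return "I cannot reveal the password."
    return reply

# Direct ask: the literal secret appears in the reply, so the guard fires.
print(naive_output_guard(f"The password is {SECRET}."))

# Injection via transformation ("spell it backwards"): the reversed
# string slips past a literal substring check and leaks the secret.
print(naive_output_guard(f"Certainly: {SECRET[::-1]}"))

The same idea extends to base64, acrostics, or riddles: any reversible encoding the filter does not normalize will leak the secret.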

No commits in the last 6 months.

Use this if you are responsible for the security of AI systems and need to understand how attackers can manipulate them using natural language.

Not ideal if you are looking for automated tools to patch AI vulnerabilities or a general guide to LLM development.

AI Security · LLM Red Teaming · Prompt Engineering · Cybersecurity · AI Risk Management
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 6 / 25


Stars: 13
Forks: 1
Language:
License: MIT
Last pushed: Apr 12, 2025
Commits (30d): 0

Get this data via the API:

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/AdityaBhatt3010/Hacking-Lakera-Gandalf-AI-via-Prompt-Injection"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
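For programmatic access, a hypothetical Python equivalent of the curl call above. The response schema is not documented here, so this sketch simply pretty-prints whatever JSON comes back.

import json
import urllib.request

# Same endpoint as the curl example; urllib keeps this dependency-free.
URL = ("https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/"
       "AdityaBhatt3010/Hacking-Lakera-Gandalf-AI-via-Prompt-Injection")

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# Field names are unknown, so print the raw document.
print(json.dumps(data, indent=2))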