TrustAI-laboratory/LLM-Security-CTF

Learn LLM/AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no fees, everything runs on the website.

Score: 32 / 100 (Emerging)

This project is an interactive learning game that teaches cybersecurity professionals about vulnerabilities in large language models (LLMs). Through a series of "capture the flag" (CTF) challenges, users submit prompts to a deliberately vulnerable LLM and learn to identify and exploit security flaws such as prompt injection. It is aimed at security researchers and practitioners responsible for securing AI-powered applications.

No commits in the last 6 months.

Use this if you are a cybersecurity professional needing hands-on experience to identify and mitigate security risks in applications powered by large language models.

Not ideal if you want a theoretical overview of AI security, or tooling that secures LLM applications directly rather than through practical, challenge-based learning.

AI-security cybersecurity-training LLM-vulnerabilities security-auditing penetration-testing
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 11 / 25


Stars: 13
Forks: 2
Language:
License: MIT
Last pushed: Aug 19, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/TrustAI-laboratory/LLM-Security-CTF"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
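The curl command above can be wrapped in a small client for querying the score of any listed repository. A minimal Python sketch, assuming only the URL pattern shown above (the JSON response schema is not documented on this page, so the fetch helper returns the decoded body as-is):

```python
# Sketch of a client for the quality-score API shown above.
# Only the URL pattern is taken from this page; the response
# schema is an assumption, so we decode JSON without interpreting it.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report for a repository."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the endpoint for this repository's report.
    print(quality_url("TrustAI-laboratory", "LLM-Security-CTF"))
```

Without an API key this endpoint allows 100 requests per day, so a real client should cache responses rather than re-fetching on every call.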