Open-Prompt-Injection and PromptMe
About Open-Prompt-Injection
liu00222/Open-Prompt-Injection
This repository provides a benchmark for prompt injection attacks and defenses in LLMs
This toolkit helps evaluate and implement defenses against 'prompt injection' attacks on applications built with large language models (LLMs). It takes an LLM, a target task (like sentiment analysis), and potential injected instructions, then measures how well the LLM resists or identifies these malicious prompts. Anyone building or managing LLM-powered applications who needs to ensure their AI models behave as intended, without being hijacked by unexpected user input, would use this.
About PromptMe
R3dShad0w7/PromptMe
PromptMe is an educational project that showcases security vulnerabilities in large language models (LLMs) and their web integrations. It includes 10 hands-on challenges inspired by the OWASP LLM Top 10, demonstrating how these vulnerabilities can be discovered and exploited in real-world scenarios.
This project helps AI Security professionals identify and understand security flaws in large language model (LLM) applications. It provides 10 interactive challenges, based on real-world scenarios, where you actively discover and exploit vulnerabilities outlined in the OWASP LLM Top 10. You start with a vulnerable LLM application and learn to find its weaknesses, culminating in 'capturing a flag' for each challenge.
Related comparisons
Scores updated daily from GitHub, PyPI, and npm data. How scores work