Open-Prompt-Injection and PromptMe

Open-Prompt-Injection
53
Established
PromptMe
47
Emerging
Maintenance 6/25
Adoption 10/25
Maturity 16/25
Community 21/25
Maintenance 2/25
Adoption 9/25
Maturity 15/25
Community 21/25
Stars: 406
Forks: 64
Downloads:
Commits (30d): 0
Language: Python
License: MIT
Stars: 94
Forks: 34
Downloads:
Commits (30d): 0
Language: Python
License: Apache-2.0
No Package No Dependents
Stale 6m No Package No Dependents

About Open-Prompt-Injection

liu00222/Open-Prompt-Injection

This repository provides a benchmark for prompt injection attacks and defenses in LLMs

This toolkit helps evaluate and implement defenses against 'prompt injection' attacks on applications built with large language models (LLMs). It takes an LLM, a target task (like sentiment analysis), and potential injected instructions, then measures how well the LLM resists or identifies these malicious prompts. Anyone building or managing LLM-powered applications who needs to ensure their AI models behave as intended, without being hijacked by unexpected user input, would use this.

LLM security AI application development prompt engineering model risk management cybersecurity

About PromptMe

R3dShad0w7/PromptMe

PromptMe is an educational project that showcases security vulnerabilities in large language models (LLMs) and their web integrations. It includes 10 hands-on challenges inspired by the OWASP LLM Top 10, demonstrating how these vulnerabilities can be discovered and exploited in real-world scenarios.

This project helps AI Security professionals identify and understand security flaws in large language model (LLM) applications. It provides 10 interactive challenges, based on real-world scenarios, where you actively discover and exploit vulnerabilities outlined in the OWASP LLM Top 10. You start with a vulnerable LLM application and learn to find its weaknesses, culminating in 'capturing a flag' for each challenge.

AI Security LLM Vulnerabilities Cybersecurity Training Application Security Penetration Testing

Scores updated daily from GitHub, PyPI, and npm data. How scores work