seojoonkim/prompt-guard

Advanced prompt injection defense system for AI agents. Multi-language detection, severity scoring, and security auditing.

Quality score: 50 / 100 (Established)

This project helps protect AI agents and large language model (LLM) applications from being manipulated or leaking sensitive information. It scans user input or AI-generated responses and flags attempts to bypass safety rules or extract confidential data such as API keys. Security engineers, AI product managers, and anyone deploying an AI assistant can use it to ensure the AI behaves as intended and doesn't reveal secrets.
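To make the idea concrete, here is a minimal sketch of pattern-based injection detection with severity scoring. The rule patterns, severity values, and `score_prompt` function are illustrative assumptions, not prompt-guard's actual API.

```python
import re

# Hypothetical detection rules: (pattern, severity 0-10).
# These are illustrative assumptions, not the project's real rule set.
RULES = [
    (re.compile(r"ignore (all|previous|prior) instructions", re.I), 8),
    (re.compile(r"reveal (your )?(system prompt|api key)", re.I), 9),
    (re.compile(r"you are now", re.I), 5),
]

def score_prompt(text: str) -> int:
    """Return the highest severity matched by any rule (0 if clean)."""
    return max((sev for pat, sev in RULES if pat.search(text)), default=0)
```

A real system layers many more signals (multi-language patterns, embeddings, output scanning), but the scoring shape is the same: every rule match contributes a severity, and the caller acts on the maximum.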


Use this if you are deploying AI agents or LLM-powered systems and need to prevent prompt injection attacks, detect data leakage, or ensure your AI adheres to its designed purpose across multiple languages.

Not ideal if you are looking for a general-purpose content moderation tool for user-generated text unrelated to AI agent security.

Tags: AI Security, Prompt Engineering, Data Loss Prevention (DLP), LLM Operations, Cybersecurity
No package published · No dependents
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 11 / 25
Community: 19 / 25


Stars: 122
Forks: 23
Language: Python
License: MIT
Last pushed: Mar 05, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/seojoonkim/prompt-guard"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
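The same endpoint can be called from Python. The URL-building helper below mirrors the curl command above; the `SAMPLE` payload and its field names are assumptions for illustration, since the real response schema isn't shown here.

```python
import json
from urllib.parse import quote

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

# Assumed response shape -- field names are hypothetical.
SAMPLE = json.loads('{"score": 50, "stars": 122, "forks": 23}')
```

In a real client you would fetch `quality_url(...)` with `urllib.request.urlopen` or `requests.get` and parse the body the same way; keep the 100 requests/day limit in mind when polling without a key.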