mdombrov-33/go-promptguard
LLM prompt injection detection for Go applications
This tool helps developers and security engineers protect their AI applications from prompt injection attacks. It takes user input, analyzes it for malicious patterns, and tells you if it's safe to send to a large language model (LLM). You can use this to prevent attackers from manipulating your LLM or revealing sensitive information.
Use this if you are building an AI-powered application in Go and need to secure your LLM from malicious user inputs.
Not ideal if your application does not use Go, or if you are not working with large language models.
Stars
10
Forks
1
Language
Go
License
MIT
Category
Last pushed
Jan 24, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/mdombrov-33/go-promptguard"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
liu00222/Open-Prompt-Injection
This repository provides a benchmark for prompt injection attacks and defenses in LLMs
lakeraai/pint-benchmark
A benchmark for prompt injection detection systems.
R3dShad0w7/PromptMe
PromptMe is an educational project that showcases security vulnerabilities in large language...
cybozu/prompt-hardener
Prompt Hardener analyzes prompt-injection-originated risk in LLM-based agents and applications.
StavC/Here-Comes-the-AI-Worm
Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts...