mdombrov-33/go-promptguard

LLM prompt injection detection for Go applications

Quality score: 35 / 100 (Emerging)

This tool helps developers and security engineers protect their AI applications from prompt injection attacks. It takes user input, analyzes it for malicious patterns, and reports whether it is safe to send to a large language model (LLM). You can use this to prevent attackers from manipulating your LLM or extracting sensitive information.

Use this if you are building an AI-powered application in Go and need to secure your LLM from malicious user inputs.

Not ideal if your application does not use Go, or if you are not working with large language models.

Tags: AI-security, LLM-security, application-security, prompt-engineering, data-privacy

No package · No dependents
Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 13 / 25
Community: 7 / 25


Stars: 10
Forks: 1
Language: Go
License: MIT
Last pushed: Jan 24, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/mdombrov-33/go-promptguard"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
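Since the project itself is written in Go, a caller might wrap the endpoint above in a small helper. This is a sketch, not part of go-promptguard: the URL pattern is taken directly from the curl example, while the helper name is hypothetical and the actual fetch is left as a comment because the response schema is not documented here.

```go
package main

import "fmt"

// qualityURL builds the quality-score API endpoint for a given
// owner/repo pair, matching the curl example above. Only the base
// URL and path come from the listing; the rest is plumbing.
func qualityURL(owner, repo string) string {
	return fmt.Sprintf(
		"https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/%s/%s",
		owner, repo,
	)
}

func main() {
	url := qualityURL("mdombrov-33", "go-promptguard")
	fmt.Println(url)
	// To fetch the JSON payload, pass url to http.Get and decode the
	// response body. The response fields are not shown in this listing,
	// so inspect the payload before relying on a specific schema.
}
```

Keeping URL construction separate from the HTTP call makes the helper trivially testable without network access.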