LLMPID/LLMPID-AS

LLM Prompt Injection Detection API Service PoC.

Quality score: 34 / 100 (Emerging)

This project helps businesses and organizations protect their Large Language Model (LLM) applications, like chatbots or document processors, from malicious inputs. It takes a user's prompt as input and classifies it as either a legitimate query or a prompt injection attempt, preventing harmful instructions from reaching your LLM. The primary users are software developers or system integrators who are building and deploying LLM-powered services.

Use this if you are a developer looking for an easy-to-integrate security layer to detect and block prompt injection attacks in your LLM-powered applications.

Not ideal if you are an end-user simply interacting with an LLM and not responsible for its underlying security infrastructure.

Tags: LLM security, application security, chatbot development, AI ethics, API integration
No package · No dependents
Maintenance: 6 / 25
Adoption: 5 / 25
Maturity: 16 / 25
Community: 7 / 25


Stars: 10
Forks: 1
Language: Go
License: AGPL-3.0
Last pushed: Nov 14, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/LLMPID/LLMPID-AS"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.