LLMPID/LLMPID-AS
LLM Prompt Injection Detection API Service PoC.
This project helps businesses and organizations protect their Large Language Model (LLM) applications, like chatbots or document processors, from malicious inputs. It takes a user's prompt as input and classifies it as either a legitimate query or a prompt injection attempt, preventing harmful instructions from reaching your LLM. The primary users are software developers or system integrators who are building and deploying LLM-powered services.
Use this if you are a developer looking for an easy-to-integrate security layer to detect and block prompt injection attacks in your LLM-powered applications.
Not ideal if you are an end-user simply interacting with an LLM and not responsible for its underlying security infrastructure.
Stars
10
Forks
1
Language
Go
License
AGPL-3.0
Category
Last pushed
Nov 14, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/LLMPID/LLMPID-AS"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...