anishrajpandey/Prompt_Injection_Detector
A lightweight web tool to detect prompt injection in AI inputs. Helps developers and researchers identify potentially harmful or manipulative prompts before they reach large language models, enhancing AI security and trust.
No commits in the last 6 months.
Stars: 1
Forks: 2
Language: HTML
License: —
Category:
Last pushed: Jul 05, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/anishrajpandey/Prompt_Injection_Detector"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
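The same endpoint can be called from code. A minimal sketch in Python using only the standard library, assuming the endpoint returns JSON (the response schema is not documented here, so the example just prints whatever fields come back):

```python
import json
import urllib.request

# Base endpoint shown in the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record for a repository (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    data = fetch_quality("anishrajpandey", "Prompt_Injection_Detector")
    # Schema is assumed, not documented: print every field as key: value.
    for key, value in data.items():
        print(f"{key}: {value}")
```

If you have a free API key, you would likely pass it as a header or query parameter; consult the API's documentation for the exact mechanism, as it is not shown on this page.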
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...