pavanvamsi3/prompt-cop
A light weight library prompt-cop scans text files in your project for potential prompt injection vulnerabilities.
Prompt Cop helps software developers proactively identify and prevent prompt injection vulnerabilities in their projects. It scans text files within a project for malicious inputs that could trick AI models. This tool takes source code and other text files as input and outputs a list of potential security flaws, helping developers create more secure AI-powered applications.
No commits in the last 6 months. Available on npm.
Use this if you are a software developer building applications that interact with large language models and need to ensure your prompts are secure against malicious attacks.
Not ideal if you are a casual user of AI tools and not involved in the development or deployment of AI-powered applications.
Stars
8
Forks
—
Language
JavaScript
License
MIT
Category
Last pushed
Sep 19, 2025
Commits (30d)
0
Dependencies
3
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/pavanvamsi3/prompt-cop"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
connectaman/LoPace
LoPace is a bi-directional encoding framework designed to reduce the storage footprint of...
LakshmiN5/promptqc
ESLint for your system prompts — catch contradictions, anti-patterns, injection vulnerabilities,...
roli-lpci/lintlang
Static linter for AI agent tool descriptions, system prompts, and configs. Catches vague...
sbsaga/toon
TOON — Laravel AI package for compact, human-readable, token-efficient data format with JSON ⇄...
nooscraft/tokuin
CLI tool – estimates LLM tokens/costs and runs provider-aware load tests for OpenAI, Anthropic,...