PromptScreen and malicious-prompt-detection
About PromptScreen
dronefreak/PromptScreen
Protect your LLMs from prompt injection and jailbreak attacks. Easy-to-use Python package with multiple detection methods, CLI tool, and FastAPI integration.
This tool helps safeguard large language model (LLM) applications by detecting and preventing malicious prompts that try to bypass safety measures or inject harmful instructions. It takes user input prompts as an input and determines if they are safe or if they constitute an attack, such as prompt injection or jailbreaking. This is for developers building and deploying LLM-powered applications who need to ensure the security and integrity of their AI systems.
About malicious-prompt-detection
AhsanAyub/malicious-prompt-detection
Detection of malicious prompts used to exploit large language models (LLMs) by leveraging supervised machine learning classifiers.
This project helps developers and engineers building applications powered by large language models (LLMs) to identify and block malicious prompts. It takes user input prompts and classifies them as either 'benign' or 'malicious' to prevent prompt injection attacks. This is for engineers and developers responsible for the security and robustness of their LLM-based applications.
Scores updated daily from GitHub, PyPI, and npm data. How scores work