arekusandr/last_layer
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
This project helps protect your AI applications from harmful inputs and outputs. It takes in user prompts or LLM responses and flags content that could lead to security risks or inappropriate behavior. Developers building applications with large language models will find this useful for maintaining a secure and safe user experience.
125 stars. No commits in the last 6 months.
Use this if you are developing an application that uses large language models and need a fast, privacy-focused way to detect and prevent prompt injection, jailbreaks, and other exploits.
Not ideal if you need a fully open-source solution where you can inspect and modify the core detection logic.
Stars
125
Forks
4
Language
Python
License
MIT
Category
Last pushed
Jul 26, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/arekusandr/last_layer"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
utkusen/promptmap
a security scanner for custom LLM applications
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...