arekusandr/last_layer

Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️

33
/ 100
Emerging

This project helps protect your AI applications from harmful inputs and outputs. It takes in user prompts or LLM responses and flags content that could lead to security risks or inappropriate behavior. Developers building applications with large language models will find this useful for maintaining a secure and safe user experience.

125 stars. No commits in the last 6 months.

Use this if you are developing an application that uses large language models and need a fast, privacy-focused way to detect and prevent prompt injection, jailbreaks, and other exploits.

Not ideal if you need a fully open-source solution where you can inspect and modify the core detection logic.

AI security LLM application development cybersecurity prompt engineering data privacy
Stale 6m No Package No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 7 / 25

How are scores calculated?

Stars

125

Forks

4

Language

Python

License

MIT

Last pushed

Jul 26, 2024

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/arekusandr/last_layer"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.