roli-lpci/little-canary

Sacrificial LLM instances as behavioral probes for prompt injection detection

34
/ 100
Emerging

This project helps protect your AI applications, like chatbots or intelligent agents, from malicious instructions known as 'prompt injections.' It examines incoming user queries to see if they're trying to trick your AI into doing unintended things. The system takes in user input, analyzes it, and then tells your AI whether the input is safe, potentially harmful (with a warning), or should be blocked entirely, allowing you to build more secure AI experiences.

Available on PyPI.

Use this if you run an AI application or agent and need a lightweight, pre-check system to detect prompt injection attempts before they reach your main AI model.

Not ideal if you require formal security guarantees, audited benchmark comparability, or cannot accept that inputs will pass through unscreened if the security system is temporarily unavailable.

AI-security chatbot-protection agent-safety LLM-deployment application-security
Maintenance 10 / 25
Adoption 4 / 25
Maturity 20 / 25
Community 0 / 25

How are scores calculated?

Stars

7

Forks

Language

Python

License

Apache-2.0

Last pushed

Mar 09, 2026

Commits (30d)

0

Dependencies

1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/roli-lpci/little-canary"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.