automorphic-ai/aegis

Self-hardening firewall for large language models

Score: 33 / 100 (Emerging)

This project protects large language models (LLMs) from harmful inputs and outputs, acting as a security layer for AI applications. It inspects user prompts and model responses and flags potential threats such as prompt injections, data leaks, and toxic language. Its primary users are developers and operations teams responsible for deploying LLM-powered applications and keeping them safe and responsible in operation.
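
As a rough illustration of the two-sided filtering pattern described above (an ingress check on the prompt and an egress check on the response), here is a minimal Python sketch. The classify_threats detector and guarded_completion wrapper are hypothetical stand-ins for illustration, not Aegis's actual API.

# Minimal sketch of the LLM-firewall pattern: screen the prompt before it
# reaches the model, then screen the response before it reaches the user.
# classify_threats is a hypothetical stand-in, NOT Aegis's actual API.
from dataclasses import dataclass

@dataclass
class Verdict:
    flagged: bool
    reasons: list[str]  # e.g. ["prompt_injection", "toxicity"]

def classify_threats(text: str) -> Verdict:
    # Hypothetical detector: swap in real injection/leak/toxicity checks.
    suspicious = ("ignore previous instructions", "reveal your system prompt")
    reasons = [s for s in suspicious if s in text.lower()]
    return Verdict(flagged=bool(reasons), reasons=reasons)

def guarded_completion(prompt: str, call_model) -> str:
    # Ingress check: block adversarial prompts before the model sees them.
    verdict = classify_threats(prompt)
    if verdict.flagged:
        return "Request blocked: " + ", ".join(verdict.reasons)
    response = call_model(prompt)
    # Egress check: block data leaks or toxic output before the user sees it.
    if classify_threats(response).flagged:
        return "Response withheld by the firewall."
    return response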

267 stars. No commits in the last 6 months.

Use this if you are deploying a large language model and need to safeguard it and its users from adversarial attacks and inappropriate content.

Not ideal if you are looking for a general-purpose content moderation tool or if your application does not involve large language models.

Tags: LLM security, AI safety, prompt injection prevention, content moderation, application security
Flags: Stale (6 months), No Package, No Dependents
Score breakdown:
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 7 / 25

The four subscores (each out of 25) sum to the overall score: 0 + 10 + 16 + 7 = 33 / 100.

Stars: 267
Forks: 6
Language: Python
License: MIT
Last pushed: Feb 28, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/automorphic-ai/aegis"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
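
For programmatic access, a minimal Python sketch using the requests library is shown below. The X-API-Key header name and the shape of the JSON response are assumptions, not documented guarantees; check the API's documentation for the actual contract.

# Minimal sketch: fetch this project's quality data from the endpoint above.
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/automorphic-ai/aegis"

def fetch_quality(api_key: str | None = None) -> dict:
    # Anonymous access allows 100 requests/day; a free key raises the
    # limit to 1,000/day (the X-API-Key header name is an assumption).
    headers = {"X-API-Key": api_key} if api_key else {}
    resp = requests.get(URL, headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(fetch_quality())  # response shape assumed, e.g. scores and repo stats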