quantifylabs/aegis-memory

Secure context engineering for AI agents. Content security · integrity verification · trust hierarchy · ACE patterns. Self-hosted, Apache 2.0.

53 / 100 · Established

This project helps operations engineers and security teams build AI agents that are protected against common vulnerabilities such as data leaks and content manipulation. It verifies the integrity and security of the information your agents consume and produce, so you can confidently deploy them in sensitive environments. It's designed for anyone running AI agents in production where security and data protection are paramount.

Available on PyPI.

Use this if you are building or managing AI agents in a production environment and need to ensure the security, integrity, and trustworthiness of the information they process and share.

Not ideal if you are developing a basic AI agent for personal use or a non-critical application where advanced security features for context are not a primary concern.

AI-security agent-operations data-protection AI-governance production-AI
Maintenance 10 / 25
Adoption 6 / 25
Maturity 22 / 25
Community 15 / 25

How are scores calculated?

Stars: 19
Forks: 5
Language: Python
License: Apache-2.0
Last pushed: Mar 02, 2026
Commits (30d): 0
Dependencies: 7

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/quantifylabs/aegis-memory"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
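The same request can be made from Python. A minimal sketch using only the standard library; the shape and field names of the JSON response are not documented here, so treat the returned dict's keys as assumptions to verify against the live payload:

```python
import json
import urllib.request

# Public quality-score endpoint (100 requests/day without a key).
URL = "https://pt-edge.onrender.com/api/v1/quality/agents/quantifylabs/aegis-memory"

def fetch_quality(url: str = URL) -> dict:
    """Fetch the quality record for a repo and return it as a dict.

    Raises urllib.error.HTTPError / URLError on failure (e.g. rate limit).
    """
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Usage (requires network access):
#   data = fetch_quality()
#   print(json.dumps(data, indent=2))
```

If you register for a free API key (1,000 requests/day), it would likely be passed as a header or query parameter; the exact mechanism isn't shown on this page, so consult the API documentation before hardcoding one.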