quantifylabs/aegis-memory
Secure context engineering for AI agents. Content security · integrity verification · trust hierarchy · ACE patterns. Self-hosted, Apache 2.0.
This project helps operations engineers and security teams build AI agents that are protected against common vulnerabilities such as data leaks and content manipulation. It verifies the integrity and enforces the security of the information your agents consume and produce, so you can deploy them confidently in sensitive environments where security and data protection are paramount.
Available on PyPI.
Use this if you are building or managing AI agents in a production environment and need to ensure the security, integrity, and trustworthiness of the information they process and share.
Not ideal if you are developing a basic AI agent for personal use or a non-critical application where advanced security features for context are not a primary concern.
Stars
19
Forks
5
Language
Python
License
Apache-2.0
Category
Last pushed
Mar 02, 2026
Commits (30d)
0
Dependencies
7
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/quantifylabs/aegis-memory"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
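The curl command above can also be scripted. A minimal Python sketch, using only the standard library: the endpoint URL comes from the page above, but the `X-API-Key` header name for authenticated requests is an assumption, so check the API documentation before relying on it.

```python
# Minimal sketch of calling the agent-quality API.
# Endpoint path is taken from the curl example above; the "X-API-Key"
# header name is an ASSUMPTION -- consult the API docs for how a free
# key is actually supplied.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-agent quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, api_key: str = "") -> dict:
    """Fetch the quality record; anonymous calls get 100 requests/day."""
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)  # hypothetical header name
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

print(quality_url("quantifylabs", "aegis-memory"))
```

With a free key (1,000 requests/day), pass it as `fetch_quality("quantifylabs", "aegis-memory", api_key="...")`.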
Compare
Related agents
kahalewai/dual-auth
Dual-Auth provides AGBAC dual-subject authorization for AI agents and humans using existing IAM...
The-17/agentsecrets
Zero-knowledge secrets infrastructure built for AI agents to operate, not just consume.
stephnangue/warden
An identity-aware egress gateway that replaces cloud credentials with zero-trust access,...
onecli/onecli
Open-source credential vault, give your AI agents access to services without exposing keys.
PunkGo/punkgo-jack
AI tool hook adapter for punkgo-kernel — every tool call gets a cryptographic receipt