rohansx/wardn
Credential isolation for AI agents. Agents never see real API keys: a structural guarantee, not a policy.
This project helps anyone using AI agents or large language models (LLMs) protect sensitive API keys and credentials. It takes your real API keys, encrypts them, and hands your AI agents inert placeholder tokens instead. Your actual keys therefore never appear in agent memory, logs, or LLM context windows, even if an agent is compromised.
Use this if you are building or running AI agents and need a robust way to prevent them from ever directly accessing or leaking your valuable API keys for services like OpenAI or Anthropic.
Not ideal if you are not working with AI agents or LLMs, or if your primary concern is traditional server-side credential management for non-AI applications.
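The core pattern described above (agents hold placeholders, real keys are swapped in only at the egress boundary) can be sketched roughly as follows. This is an illustrative sketch, not wardn's actual API: the `Vault`, `register`, and `resolve` names are invented for the example, and a real implementation would keep keys encrypted at rest rather than in a plain map.

```rust
use std::collections::HashMap;

/// Illustrative stand-in for wardn's credential store (names are
/// hypothetical, not the real wardn API).
struct Vault {
    // placeholder -> real credential; wardn would encrypt the real
    // key at rest, a plain map just keeps this sketch short
    map: HashMap<String, String>,
}

impl Vault {
    fn new() -> Self {
        Vault { map: HashMap::new() }
    }

    /// Register a real key and return the inert placeholder that is
    /// the only thing the agent ever sees.
    fn register(&mut self, name: &str, real_key: &str) -> String {
        let placeholder = format!("wardn://{}", name);
        self.map.insert(placeholder.clone(), real_key.to_string());
        placeholder
    }

    /// Rewrite an outbound credential just before the request leaves
    /// the trusted boundary; the agent never observes the result.
    fn resolve(&self, header_value: &str) -> String {
        match self.map.get(header_value) {
            Some(real) => format!("Bearer {}", real),
            None => header_value.to_string(), // unknown values pass through
        }
    }
}

fn main() {
    let mut vault = Vault::new();
    let placeholder = vault.register("openai", "sk-real-secret");

    // The agent's context only ever contains the placeholder...
    assert_eq!(placeholder, "wardn://openai");
    // ...while the real key is substituted at egress.
    assert_eq!(vault.resolve(&placeholder), "Bearer sk-real-secret");
    println!("{}", vault.resolve(&placeholder));
}
```

The structural guarantee comes from where `resolve` runs: it executes outside the agent's process, so even a fully compromised agent can only leak the `wardn://` placeholder, which is useless without the vault.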
Stars: 26
Forks: 2
Language: Rust
License: MIT
Category:
Last pushed: Mar 26, 2026
Monthly downloads: 22
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/rohansx/wardn"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
Higher-rated alternatives
quantifylabs/aegis-memory
Secure context engineering for AI agents. Content security · integrity verification · trust...
kahalewai/dual-auth
Dual-Auth provides AGBAC dual-subject authorization for AI agents and humans using existing IAM...
The-17/agentsecrets
Zero-knowledge secrets infrastructure built for AI agents to operate, not just consume.
stephnangue/warden
An identity-aware egress gateway that replaces cloud credentials with zero-trust access,...
onecli/onecli
Open-source credential vault, give your AI agents access to services without exposing keys.