rohansx/wardn

Credential isolation for AI agents. Agents never see real API keys: a structural guarantee, not a policy.

Score: 39 / 100 (Emerging)

This project helps anyone using AI agents or large language models (LLMs) protect their sensitive API keys and credentials. It takes your real API keys, encrypts them, and hands your AI agents inert placeholder tokens instead. Your actual keys are never exposed in agent memory, logs, or LLM context windows, even if an agent is compromised.

Use this if you are building or running AI agents and need a robust way to prevent them from ever directly accessing or leaking your valuable API keys for services like OpenAI or Anthropic.

Not ideal if you are not working with AI agents or LLMs, or if your primary concern is traditional server-side credential management for non-AI applications.

AI agent security, LLM security, credential management, data privacy, API key protection
No Package · No Dependents
Maintenance 13 / 25
Adoption 10 / 25
Maturity 9 / 25
Community 7 / 25


Stars: 26
Forks: 2
Language: Rust
License: MIT
Last pushed: Mar 26, 2026
Monthly downloads: 22
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/rohansx/wardn"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.