shisa-ai/shisad
Security-first AI agent daemon — the model proposes actions, the runtime decides what to execute
This project helps operations engineers and IT security teams safely integrate AI agents into critical workflows. It acts as a secure intermediary: the AI model proposes actions, and the runtime applies strict security policies before any action is executed against external systems such as files, networks, or messaging channels. The result is that AI agents can be used confidently for high-impact tasks without fear of accidental or malicious misuse.
Available on PyPI.
Use this if you need to deploy AI agents that access sensitive data, interact with untrusted inputs, or perform consequential actions, and you require robust security and auditability.
Not ideal if you are looking for a simple, plug-and-play AI agent for low-risk, non-critical tasks without strict security or auditing requirements.
Stars
9
Forks
—
Language
Python
License
Apache-2.0
Category
Last pushed
Apr 04, 2026
Commits (30d)
0
Dependencies
7
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/shisa-ai/shisad"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
ghostwright/phantom
An AI co-worker with its own computer. Self-evolving, persistent memory, MCP server, secure...
dograh-hq/dograh
Open Source Voice Agent Platform
gmickel/flow-next
Plan-first AI workflow plugin for Claude Code, OpenAI Codex, and Factory Droid. Zero-dep task...
joseairosa/recall
Persistent cross-session memory for Claude & AI agents. Self-host on Redis/Valkey, or use the...
lintsinghua/claude-code-book
《御舆:解码 Agent Harness》("Decoding the Agent Harness"): a 420,000-character breakdown of the harness skeleton and nervous system of AI agents, via a deep analysis of Claude Code's architecture; 15 chapters spanning the conversation loop to building your own Agent...