aryanbhosale/sh-guard

Semantic shell command safety classifier — AST-based risk scoring for AI coding agents

Quality score: 35 / 100 (Emerging)

This tool helps developers who use AI coding agents prevent dangerous shell commands from being executed accidentally. It takes a shell command proposed by an AI agent and outputs a risk score with an explanation, classifying the command as safe, caution, or critical. Developers can use this to automatically block or flag high-risk commands before they cause harm to their systems or data.
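To illustrate the three-tier output described above, here is a minimal sketch in Rust. It is not sh-guard's implementation: sh-guard scores commands from an AST, while this toy version uses substring heuristics, and the pattern lists and `classify` function are hypothetical, chosen only to show the safe/caution/critical shape of the result.

```rust
// Illustrative sketch only: the pattern lists below are hypothetical
// examples, not sh-guard's actual rule set, and real AST-based scoring
// would parse the command rather than match substrings.

#[derive(Debug, PartialEq)]
enum Risk {
    Safe,
    Caution,
    Critical,
}

fn classify(cmd: &str) -> Risk {
    // Commands that destroy data or filesystems.
    let critical = ["rm -rf /", "mkfs", "dd if="];
    // Commands that widen privileges or run untrusted code.
    let caution = ["sudo ", "chmod 777", "| sh"];

    if critical.iter().any(|p| cmd.contains(*p)) {
        Risk::Critical
    } else if caution.iter().any(|p| cmd.contains(*p)) {
        Risk::Caution
    } else {
        Risk::Safe
    }
}

fn main() {
    assert_eq!(classify("ls -la"), Risk::Safe);
    assert_eq!(classify("sudo apt update"), Risk::Caution);
    assert_eq!(classify("dd if=/dev/zero of=/dev/sda"), Risk::Critical);
    println!("ok");
}
```

An agent harness would call a classifier like this on every proposed command and block or surface anything above `Safe` for human review.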

Use this if you use AI coding agents like Claude Code, Codex, or Cursor and want an automatic safety net to review and block potentially harmful shell commands they generate.

Not ideal if you manually write all your shell commands and do not use AI coding assistants that generate and execute code on your behalf.

AI-developer-tools developer-security code-generation-safety shell-scripting automation-risk-management
No package · No dependents
Maintenance 13 / 25
Adoption 4 / 25
Maturity 9 / 25
Community 9 / 25


Stars: 7
Forks: 1
Language: Rust
License: GPL-3.0
Category: coding-agent
Last pushed: Apr 05, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/aryanbhosale/sh-guard"

Open to everyone: 100 requests/day with no key needed. Get a free API key for 1,000 requests/day.