divagr18/SecureShell

Plug-and-play terminal security layer for LLM agents. Drop-in gatekeeper that prevents dangerous shell commands. Works with OpenAI, Claude, Gemini & more.

Quality score: 47 / 100 (Emerging)

This project helps developers safely give Large Language Model (LLM) agents command-line access. It acts as a "zero-trust" gatekeeper, evaluating every shell command an LLM agent tries to execute to prevent dangerous or inappropriate actions. Given a command proposed by an agent, it either executes the command or returns a detailed reason for blocking it. It is aimed at software developers and AI engineers building applications that let LLM agents run operating-system commands.

Available on PyPI.

Use this if you are developing an LLM agent that needs to run shell commands and you want to ensure those commands are safe, platform-compatible, and well-reasoned.

Not ideal if you are an end-user without programming knowledge, as this is a developer tool requiring integration into an existing codebase.
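The gatekeeping pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration of a denylist-based command gate and is not SecureShell's actual API; the `BLOCKED_BINARIES` set and `evaluate` function are invented for this example.

```python
import shlex

# Binaries considered too dangerous to let an agent run (illustrative only).
BLOCKED_BINARIES = {"rm", "dd", "mkfs", "shutdown", "reboot"}

def evaluate(command: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a shell command proposed by an agent."""
    try:
        tokens = shlex.split(command)
    except ValueError as exc:
        return False, f"unparseable command: {exc}"
    if not tokens:
        return False, "empty command"
    binary = tokens[0]
    if binary in BLOCKED_BINARIES:
        return False, f"'{binary}' is on the denylist"
    return True, "ok"

print(evaluate("ls -la"))    # allowed
print(evaluate("rm -rf /"))  # blocked, with a reason the agent can read
```

A real gatekeeper would go further than a denylist (argument inspection, platform checks, an allowlist, or LLM-based review), but the shape is the same: every command passes through a single evaluation point before execution.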

Tags: AI-safety, LLM-agent-development, software-development, DevOps-automation, AI-engineering
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 20 / 25
Community: 11 / 25


Stars: 22
Forks: 3
Language: Python
License: MIT
Category: coding-agent
Last pushed: Jan 29, 2026
Commits (30d): 0
Dependencies: 8

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/divagr18/SecureShell"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.