faramesh/faramesh-core

Score: 41 / 100 (Emerging)

This tool helps AI engineers and operations teams control the actions of their AI agents and large language models (LLMs). It takes an agent's planned actions and a set of human-defined rules, then decides whether the agent may proceed, needs human approval, or should be blocked entirely. The result is a secure, auditable record of all agent decisions, supporting compliance and preventing unintended actions.

Use this if you need to enforce strict, deterministic rules and obtain auditable proof for the actions performed by your AI agents, especially in sensitive or regulated environments.

Not ideal if you are looking for a system that uses AI to monitor or interpret the behavior of other AI, or if you don't require pre-execution control over agent actions.

Tags: AI-governance · AI-safety · agent-operations · compliance-auditing · responsible-AI
No package · No dependents

Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 11 / 25
Community: 15 / 25

Stars: 14
Forks: 4
Language: Go
License: not listed
Last pushed: Mar 11, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/faramesh/faramesh-core"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
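The endpoint above can also be called programmatically. A minimal Go sketch that builds the request URL for an arbitrary owner/repo pair; the path pattern for repos other than this one is an assumption generalized from the single documented call, and the response shape is not specified here, so the sketch stops at constructing the URL:

```go
package main

import (
	"fmt"
	"net/url"
)

// qualityURL builds the quality-API endpoint for a given owner/repo pair.
// ASSUMPTION: other repos follow the same /quality/agents/{owner}/{repo}
// path shape shown in the curl example above.
func qualityURL(owner, repo string) string {
	return fmt.Sprintf("https://pt-edge.onrender.com/api/v1/quality/agents/%s/%s",
		url.PathEscape(owner), url.PathEscape(repo))
}

func main() {
	// Reproduces the documented call for faramesh/faramesh-core.
	fmt.Println(qualityURL("faramesh", "faramesh-core"))
}
```

From here, a plain `http.Get(qualityURL(...))` would fetch the JSON payload, subject to the 100 requests/day anonymous limit.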