FareedKhan-dev/agentic-guardrails

Layered guardrails to make agentic AI safer and more reliable.

Score: 42 / 100 (Emerging)

This project provides a comprehensive, layered system to enhance the safety and reliability of AI agents and RAG solutions. It intercepts potentially risky or non-compliant user prompts, agent plans, and outputs, and applies multiple checks to filter out security risks, hallucinations, and compliance violations. This makes it valuable for AI developers, ML engineers, and MLOps teams building and deploying AI applications in sensitive or regulated environments.
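
Conceptually, a layered guardrail pipeline runs each stage through its own check before the next stage executes. The sketch below is a minimal illustration of that flow only; function names such as check_input, check_plan, and check_output are assumptions for this example, not this repository's actual API.

# Illustrative layered-guardrail flow; not the repository's actual API.
def check_input(prompt: str) -> bool:
    # Input layer: screen for e.g. prompt injection or disallowed requests.
    return "ignore previous instructions" not in prompt.lower()

def check_plan(plan: list[str]) -> bool:
    # Planning layer: block plans that would invoke unsafe tools or actions.
    return all("delete_database" not in step for step in plan)

def check_output(answer: str) -> bool:
    # Output layer: reject empty or otherwise non-compliant answers.
    return len(answer) > 0

def guarded_run(prompt: str, plan_fn, answer_fn) -> str:
    # Each layer must pass before the next stage of the agent runs.
    if not check_input(prompt):
        return "Blocked at input layer."
    plan = plan_fn(prompt)
    if not check_plan(plan):
        return "Blocked at planning layer."
    answer = answer_fn(plan)
    if not check_output(answer):
        return "Blocked at output layer."
    return answer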

No commits in the last 6 months.

Use this if you are building an AI agent or RAG system and need robust, multi-stage protection against malicious inputs, unsafe actions, and inaccurate or non-compliant outputs.

Not ideal if your AI application has no access to sensitive data, operates in a low-risk environment, or does not involve autonomous decision-making where errors could have significant consequences.

Tags: AI Safety, MLOps, Agentic AI Development, Compliance Automation, Risk Mitigation
Flags: Stale (6m), No Package, No Dependents
Maintenance: 2 / 25
Adoption: 7 / 25
Maturity: 15 / 25
Community: 18 / 25


Stars: 34
Forks: 12
Language: Jupyter Notebook
License: MIT
Last pushed: Oct 05, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/FareedKhan-dev/agentic-guardrails"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
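
For scripted access, here is a minimal Python sketch of the same call, using the endpoint from the curl example above. It assumes the endpoint returns JSON; the exact response schema is not documented in this listing.

import requests

# Same endpoint as the curl example above.
url = "https://pt-edge.onrender.com/api/v1/quality/rag/FareedKhan-dev/agentic-guardrails"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()  # assumed JSON payload; schema not specified here
print(data)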