arthur-ai/arthur-engine

Make AI work for everyone - monitoring and governance for your AI/ML

Score: 48 / 100 (Emerging)

This tool helps you verify that your AI and machine learning models are working as expected and generating appropriate outputs. You supply your model's predictions and actual outcomes (or LLM responses), and it evaluates performance metrics such as accuracy, drift, and toxicity, and surfaces potential issues. It's aimed at data scientists, ML engineers, and product managers who need to monitor and govern the quality and safety of their AI applications.

Use this if you need to thoroughly evaluate, compare, and set up real-time safety guardrails for your machine learning models or generative AI applications.

Not ideal if you're looking for a simple, one-off model evaluation tool without continuous monitoring or guardrail needs.

AI-governance ML-monitoring LLM-evaluation model-validation data-security
No Package · No Dependents
Maintenance: 10 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 14 / 25


Stars: 69
Forks: 9
Language: Python
License: MIT
Last pushed: Mar 13, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/arthur-ai/arthur-engine"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
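If you'd rather call the endpoint from Python than shell out to curl, a minimal sketch follows. Only the URL and the anonymous rate limit are stated above; the `X-Api-Key` header name and the JSON response body are assumptions to verify against the API's documentation.

```python
import json
import urllib.request
from typing import Optional

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def build_request(repo: str, api_key: Optional[str] = None) -> urllib.request.Request:
    """Build the GET request for a repo's quality data.

    The `X-Api-Key` header name is an assumption -- check the API docs
    for the actual authentication scheme when using a free key.
    """
    req = urllib.request.Request(f"{API_BASE}/{repo}")
    if api_key:
        req.add_header("X-Api-Key", api_key)
    return req


def fetch_quality(repo: str, api_key: Optional[str] = None) -> dict:
    """Fetch and decode the response (assumes a JSON body)."""
    with urllib.request.urlopen(build_request(repo, api_key)) as resp:
        return json.load(resp)


# Anonymous usage (100 requests/day per the note above):
req = build_request("arthur-ai/arthur-engine")
print(req.full_url)
```

Passing `api_key=...` simply attaches the assumed header; anonymous requests omit it and fall under the 100-requests/day limit.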