future-agi/ai-evaluation

Evaluation Framework for all your AI-related Workflows

Score: 54 / 100 (Established)

This framework helps AI product managers and developers assess, monitor, and safeguard their Large Language Model (LLM) applications. It takes your LLM's outputs, context, and user inputs and produces scores and explanations across 50+ metrics such as faithfulness, toxicity, and relevancy. You can use it to ensure your AI behaves as expected and adheres to safety standards.

Use this if you are building LLM applications and need a comprehensive way to evaluate their performance, ensure safety, and prevent issues like hallucinations or security vulnerabilities.

Not ideal if you are looking for a general-purpose machine learning evaluation tool beyond LLM-specific workflows.
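To give a rough sense of the pattern such metrics follow, here is a minimal, self-contained sketch of a toy faithfulness check. Every name in it (EvalResult, evaluate_faithfulness) is a hypothetical placeholder for illustration, not the library's actual API; a real framework would typically score faithfulness with an LLM judge or a learned model rather than simple term overlap. See the repository's README for the real interface.

    # Hypothetical sketch of an LLM evaluation flow; names below are
    # placeholders, not the actual ai-evaluation API.
    from dataclasses import dataclass


    @dataclass
    class EvalResult:
        metric: str
        score: float       # 0.0 (worst) to 1.0 (best)
        explanation: str


    def evaluate_faithfulness(user_input: str, context: str, output: str) -> EvalResult:
        """Toy heuristic: fraction of output terms supported by the context.

        A real framework would use an LLM judge or learned model here.
        """
        output_terms = set(output.lower().split())
        context_terms = set(context.lower().split())
        overlap = len(output_terms & context_terms) / max(len(output_terms), 1)
        return EvalResult(
            metric="faithfulness",
            score=round(overlap, 2),
            explanation=f"{overlap:.0%} of output terms appear in the provided context.",
        )


    if __name__ == "__main__":
        result = evaluate_faithfulness(
            user_input="What is the capital of France?",
            context="Paris is the capital and largest city of France.",
            output="The capital of France is Paris.",
        )
        print(result.metric, result.score, result.explanation)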

LLM-operations AI-safety prompt-engineering chatbot-testing model-governance
No package published, no dependents
Maintenance 10 / 25
Adoption 9 / 25
Maturity 15 / 25
Community 20 / 25

How are scores calculated?
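(For this repository, the overall score appears to match the sum of the four subscores listed above: 10 + 9 + 15 + 20 = 54.)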

Stars: 84
Forks: 29
Language: Python
License: GPL-3.0
Last pushed: Mar 09, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/future-agi/ai-evaluation"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
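If you would rather fetch the same data from Python than from curl, a standard-library call like the following works; the shape of the JSON response is not documented on this page, so the snippet simply pretty-prints whatever the endpoint returns.

    import json
    import urllib.request

    # Same endpoint as the curl example above; no API key needed for the free tier.
    URL = "https://pt-edge.onrender.com/api/v1/quality/agents/future-agi/ai-evaluation"

    with urllib.request.urlopen(URL) as response:
        data = json.load(response)

    # Response schema is not documented on this page, so just pretty-print it.
    print(json.dumps(data, indent=2))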