future-agi/ai-evaluation
Evaluation framework for all your AI-related workflows
This framework helps AI product managers and developers assess, monitor, and guard their Large Language Model (LLM) applications. It takes your LLM's outputs, context, and user inputs to produce scores and explanations across 50+ metrics like faithfulness, toxicity, and relevancy. You can use it to ensure your AI behaves as expected and adheres to safety standards.
Use this if you are building LLM applications and need a comprehensive way to evaluate their performance, ensure safety, and prevent issues like hallucinations or security vulnerabilities.
Not ideal if you are looking for a general-purpose machine learning evaluation tool beyond LLM-specific workflows.
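To illustrate the score-plus-explanation pattern the framework's metrics follow, here is a toy, self-contained sketch. The EvalResult dataclass and the keyword-based toxicity check are hypothetical stand-ins for illustration only, not the library's actual API:

from dataclasses import dataclass

@dataclass
class EvalResult:
    # Hypothetical result shape: each metric pairs a score
    # with a human-readable explanation.
    metric: str
    score: float       # 0.0 (fails) to 1.0 (passes)
    explanation: str

def toy_toxicity_eval(output: str) -> EvalResult:
    # Hypothetical stand-in for a toxicity metric: flags outputs
    # containing words from a small blocklist.
    blocklist = {"idiot", "hate"}
    hits = [w for w in output.lower().split() if w.strip(".,!?") in blocklist]
    score = 0.0 if hits else 1.0
    reason = f"flagged terms: {hits}" if hits else "no flagged terms found"
    return EvalResult(metric="toxicity", score=score, explanation=reason)

print(toy_toxicity_eval("You are an idiot."))

The real library covers 50+ such metrics (faithfulness, relevancy, and so on), typically driven by LLM judges or classifiers rather than keyword lists.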
Stars: 84
Forks: 29
Language: Python
License: GPL-3.0
Last pushed: Mar 09, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/future-agi/ai-evaluation"
Open to everyone: 100 requests/day, no key needed. A free key raises this to 1,000 requests/day.
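For example, fetching the same data from Python (a minimal sketch using the requests library; the response is assumed to be JSON, and its exact fields are not documented here):

import requests

# Public endpoint from the curl command above; no API key
# is required at the 100 requests/day tier.
url = "https://pt-edge.onrender.com/api/v1/quality/agents/future-agi/ai-evaluation"
resp = requests.get(url, timeout=10)
resp.raise_for_status()

# Assumed to be a JSON payload; inspect it rather than relying on
# specific field names, which are not documented here.
data = resp.json()
print(data)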
Related agents
StonyBrookNLP/appworld
🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and...
qualifire-dev/rogue
AI Agent Evaluator & Red Team Platform
microsoft/WindowsAgentArena
Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking of...
agentscope-ai/OpenJudge
OpenJudge: A Unified Framework for Holistic Evaluation and Quality Rewards
dreadnode/AIRTBench-Code
Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models