agentscope-ai/OpenJudge
OpenJudge: A Unified Framework for Holistic Evaluation and Quality Rewards
This project helps AI application developers, machine learning engineers, and researchers assess and improve the quality of their AI agents and chatbots. It takes in test data together with the AI application's responses and returns objective quality scores and detailed feedback, making it most useful for teams building and refining AI-powered products.
Use this if you need a reliable way to systematically evaluate your AI application's performance, from general text quality to agent-specific behaviors like tool use, and want to easily integrate this into your development workflow for continuous optimization.
Not ideal if you are looking for a general-purpose analytics tool or a platform for evaluating traditional software applications, as its focus is specifically on AI application quality and optimization.
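As an illustration of that input/output flow, here is a minimal sketch of an evaluation loop in Python. All names here (evaluate_response, score, feedback) are hypothetical placeholders, not OpenJudge's actual API; the sketch only mirrors the "test data and responses in, scores and feedback out" shape described above.

import json

def evaluate_response(question: str, expected: str, actual: str) -> dict:
    """Hypothetical judge call: OpenJudge's real graders would fill this role.
    The name, signature, and result fields are placeholders, not the real API."""
    # A real grader would typically prompt an LLM judge or apply rule-based checks;
    # here a simple substring match stands in for that logic.
    score = 1.0 if expected.strip().lower() in actual.lower() else 0.0
    feedback = "Answer covers the expected content." if score else "Expected content is missing."
    return {"question": question, "score": score, "feedback": feedback}

# Test data paired with the AI application's responses go in ...
test_cases = [
    {"question": "What is 2 + 2?", "expected": "4", "actual": "2 + 2 equals 4."},
    {"question": "Capital of France?", "expected": "Paris", "actual": "It is Lyon."},
]

# ... and per-case scores with feedback come out.
results = [evaluate_response(**case) for case in test_cases]
print(json.dumps(results, indent=2))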
Stars: 459
Forks: 37
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 12, 2026
Commits (last 30 days): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/agentscope-ai/OpenJudge"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
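For example, the same record can be fetched from Python with requests. This is a minimal sketch: the response is assumed to be JSON, the Authorization header name is an assumption, and the exact fields in the returned record are not documented here.

import requests

# Quality-data endpoint for this repository (same URL as the curl example above).
API_URL = "https://pt-edge.onrender.com/api/v1/quality/agents/agentscope-ai/OpenJudge"

def fetch_quality_record(api_key=None):
    """Fetch the repository's quality record.

    An API key is optional: anonymous access is limited to 100 requests/day,
    and a free key raises that to 1,000/day. The Bearer header used when a key
    is supplied is an assumption, not a documented requirement.
    """
    headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
    resp = requests.get(API_URL, headers=headers, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    record = fetch_quality_record()
    # Print the raw JSON; the schema (stars, forks, scores, ...) is not guaranteed here.
    print(record)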
Related agents
StonyBrookNLP/appworld
🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and...
qualifire-dev/rogue
AI Agent Evaluator & Red Team Platform
microsoft/WindowsAgentArena
Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking of...
future-agi/ai-evaluation
Evaluation Framework for all your AI related Workflows
RouteWorks/RouterArena
RouterArena: An open framework for evaluating LLM routers with standardized datasets, metrics,...