claw-eval/claw-eval
Claw-Eval is an evaluation harness for testing LLMs as agents. All tasks are verified by humans.
This project helps AI researchers and developers reliably test and compare how well large language models (LLMs) perform complex, real-world tasks. It takes an LLM as input, runs it through a set of challenges in a sandboxed environment, and reports a transparent, human-verified performance score. It's designed for anyone building or evaluating AI agents that need to act autonomously.
Use this if you are developing or selecting an AI agent and need an objective, reproducible way to measure its ability to handle practical tasks, from web searches to content creation.
Not ideal if you are looking for a simple API to integrate an LLM into your application without needing to rigorously benchmark its agentic capabilities.
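As a rough illustration of that flow (a minimal sketch; `Task`, `evaluate_agent`, and the verifier callback are hypothetical names for illustration, not claw-eval's actual API), an agent-evaluation loop generally looks like this:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    """A single agentic task with a human-verified success check."""
    prompt: str
    verify: Callable[[str], bool]  # returns True if the agent's output passes

def evaluate_agent(agent: Callable[[str], str], tasks: list[Task]) -> float:
    """Run every task through the agent and return the fraction that pass."""
    passed = sum(1 for task in tasks if task.verify(agent(task.prompt)))
    return passed / len(tasks)

# Example: a trivial "agent" and one task, just to show the shape of the loop.
tasks = [Task(prompt="Reply with the word OK", verify=lambda out: "OK" in out)]
print(evaluate_agent(lambda prompt: "OK", tasks))  # -> 1.0
```

A real harness adds sandboxing, tool access, and task suites on top of this loop, but the input (an LLM-backed agent) and the output (a score) stay the same.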
Stars: 68
Forks: 1
Language: Python
License: —
Category: —
Last pushed: Mar 12, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/claw-eval/claw-eval"
Open to everyone: 100 requests/day with no key needed. Get a free key to raise the limit to 1,000 requests/day.
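The same endpoint can also be queried from Python. This is a minimal sketch using the public URL from the curl example above; the response schema is not documented here, so the example just prints the raw JSON:

```python
import requests

# Public endpoint from the curl example above; no API key required
# for up to 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/claw-eval/claw-eval"

response = requests.get(url, timeout=10)
response.raise_for_status()
print(response.json())  # schema not documented here, so dump the raw payload
```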
Higher-rated alternatives
EvolvingLMMs-Lab/lmms-eval
One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks
vibrantlabsai/ragas
Supercharge Your LLM Application Evaluations 🚀
open-compass/VLMEvalKit
Open-source evaluation toolkit of large multi-modality models (LMMs), support 220+ LMMs, 80+ benchmarks
EuroEval/EuroEval
The robust European language model benchmark.
Giskard-AI/giskard-oss
🐢 Open-Source Evaluation & Testing library for LLM Agents