THUNLP-MT/StableToolBench
A new tool learning benchmark aiming at well-balanced stability and reality, based on ToolBench.
This project provides a benchmark for evaluating how well large language models (LLMs) can use external tools and APIs. It takes an LLM and a set of real-world API descriptions, then assesses the model's ability to call these tools correctly and achieve the desired outcomes. To keep evaluations stable, real APIs are backed by cached and simulated responses, so results do not drift when live APIs change or go offline. Developers and researchers working on improving LLM tool-use capabilities will find this useful.
220 stars. No commits in the last 6 months.
Use this if you are developing or researching large language models and need a reliable, consistent way to test their ability to interact with and utilize external APIs and tools.
Not ideal if you are an end-user looking for a ready-to-use application, as this is a development and research benchmark, not a user-facing tool.
Stars: 220
Forks: 22
Language: Python
License: Apache-2.0
Category: llm-tools
Last pushed: Apr 15, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/THUNLP-MT/StableToolBench"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
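If you prefer a scripted lookup, here is a minimal Python sketch that fetches the same endpoint using only the standard library. The response schema is not documented on this page, so the field names are not assumed; the script simply pretty-prints whatever JSON the endpoint returns.

import json
import urllib.request

# Endpoint taken from the curl example above; anonymous access allows 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/THUNLP-MT/StableToolBench"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # assumes the endpoint returns a JSON body

# The exact schema isn't documented here, so just pretty-print the payload.
print(json.dumps(data, indent=2))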
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)