THUNLP-MT/StableToolBench

A tool learning benchmark that aims to balance stability and realism, based on ToolBench.

Score: 42 / 100 (Emerging)

This project offers a robust benchmark for evaluating how well AI models, particularly large language models (LLMs), can use external tools and APIs. It takes in an LLM and a set of real-world API descriptions, then provides a stable and realistic assessment of the LLM's ability to correctly call these tools and achieve desired outcomes. Developers and researchers working on improving LLM tool-use capabilities would find this useful.

220 stars. No commits in the last 6 months.

Use this if you are developing or researching large language models and need a reliable, consistent way to test their ability to interact with and utilize external APIs and tools.

Not ideal if you are an end-user looking for a ready-to-use application, as this is a development and research benchmark, not a user-facing tool.

Tags: AI-model-evaluation, LLM-development, tool-integration, API-interaction, AI-research
Flags: Stale (6m) · No Package · No Dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 14 / 25

How are scores calculated?
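
From the breakdown above, the four dimension scores appear to sum directly to the overall total: 2 + 10 + 16 + 14 = 42 out of a possible 100.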

Stars: 220
Forks: 22
Language: Python
License: Apache-2.0
Last pushed: Apr 15, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/THUNLP-MT/StableToolBench"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
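
For reference, here is a minimal Python sketch of the same request. It assumes the endpoint returns JSON; the response schema and the Bearer-token header used to pass an API key are assumptions, not documented on this page.

import requests

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def fetch_quality(catalog: str, owner: str, repo: str, api_key: str | None = None) -> dict:
    # Anonymous calls are limited to 100 requests/day; a free key raises
    # this to 1,000/day. Passing the key as a Bearer token is an assumption:
    # check the service's API docs for the actual authentication scheme.
    headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
    resp = requests.get(f"{BASE_URL}/{catalog}/{owner}/{repo}", headers=headers, timeout=10)
    resp.raise_for_status()
    # The response schema is undocumented here, so just return the parsed JSON.
    return resp.json()

data = fetch_quality("llm-tools", "THUNLP-MT", "StableToolBench")
print(data)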