TreeAI-Lab/NumericBench

A comprehensive benchmark to evaluate and improve the fundamental numerical reasoning abilities of large language models using diverse synthetic and real-world datasets.

Score: 17 / 100 (Experimental)

This tool helps AI researchers and developers systematically test how well large language models (LLMs) handle numbers and numerical tasks. It takes an LLM and a variety of numerical datasets (like stock trends or weather patterns) as input, then outputs a detailed evaluation of the LLM's arithmetic, number recognition, comparison, and logical reasoning abilities. It's designed for anyone building or deploying LLMs who needs to ensure their models are reliable with numerical data.

No commits in the last 6 months.

Use this if you are developing or evaluating large language models and need a rigorous way to measure their fundamental numerical reasoning capabilities across diverse real-world and synthetic data.

Not ideal if you are looking for a tool to solve specific numerical problems or perform data analysis directly, as this is a benchmark for assessing AI models.

Tags: AI model evaluation · natural language processing · machine learning research · quantitative reasoning · LLM development
Badges: No License · Stale 6m · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 0 / 25

The four subscores, each out of 25, add up to the overall score of 17 / 100 (2 + 7 + 8 + 0 = 17).


Stars: 29
Forks:
Language:
License: None
Last pushed: Jun 21, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/TreeAI-Lab/NumericBench"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
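For scripted access, here is a minimal Python sketch that fetches the same endpoint as the curl command above. It assumes the endpoint returns a JSON body; the response schema is not documented here, so the example simply prints the full payload rather than guessing at field names.

import json
import urllib.request

# Same public endpoint as the curl example above; no API key is required
# for up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/TreeAI-Lab/NumericBench"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # assumes the response body is JSON

# Inspect the payload to learn the exact keys before relying on them.
print(json.dumps(data, indent=2))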