deep-symbolic-mathematics/llm-srbench
[ICML2025 Oral] LLM-SRBench: A New Benchmark for Scientific Equation Discovery with Large Language Models
This project offers a standardized benchmark for assessing how well large language models can discover scientific equations. It provides a collection of 239 problems from various scientific fields, designed to test an LLM's ability to reason over data and uncover the underlying mathematical relationships rather than simply recall memorized formulas. Each problem supplies a scientific problem description and associated data; the benchmark then evaluates how accurately the LLM recovers the correct equation. It is intended for researchers and scientists, including physicists, chemists, and biologists, who use LLMs for modeling and discovery.
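To make the evaluation concrete: equation-discovery benchmarks typically score a candidate equation by how well it reproduces held-out data numerically. The sketch below shows one common metric, normalized mean squared error (NMSE); the function name and interface are illustrative assumptions for exposition, not llm-srbench's actual API.

import numpy as np
import sympy as sp

# Illustrative sketch only: score_candidate and the NMSE metric choice are
# assumptions, not part of llm-srbench's real interface.
def score_candidate(expr_str, variables, X, y):
    """Normalized mean squared error of a candidate equation on data (X, y)."""
    syms = [sp.Symbol(v) for v in variables]
    expr = sp.sympify(expr_str)              # parse the LLM-proposed equation
    f = sp.lambdify(syms, expr, "numpy")     # compile it to a vectorized function
    y_pred = f(*[X[:, i] for i in range(X.shape[1])])
    return float(np.mean((y - y_pred) ** 2) / np.var(y))

# Example: test the hypothesis y = 3*x0**2 + x1 against noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(100, 2))
y = 3 * X[:, 0] ** 2 + X[:, 1] + rng.normal(0.0, 0.01, 100)
print(score_candidate("3*x0**2 + x1", ["x0", "x1"], X, y))  # near 0 means a good fit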
No commits in the last 6 months.
Use this if you are a researcher or scientist evaluating how effectively different large language models can perform symbolic regression to uncover fundamental scientific equations from observational data.
Not ideal if you are looking for an off-the-shelf tool to directly discover equations for your own dataset without intending to benchmark LLM capabilities.
Stars: 94
Forks: 11
Language: Python
License: —
Category: —
Last pushed: Jul 31, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/deep-symbolic-mathematics/llm-srbench"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
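The same endpoint can be queried from Python. Below is a minimal sketch using the standard requests library; the response schema is not documented here, so the example simply prints the returned JSON, and the header for keyed access is an assumption.

import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/deep-symbolic-mathematics/llm-srbench")

# No key needed for up to 100 requests/day. For keyed access, the exact
# auth header name is an assumption -- check the provider's docs.
resp = requests.get(URL, timeout=10)
resp.raise_for_status()
print(resp.json())  # inspect the payload; the schema is undocumented here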
Higher-rated alternatives
stanfordnlp/axbench
Stanford NLP Python library for benchmarking the utility of LLM interpretability methods
aidatatools/ollama-benchmark
LLM Benchmark for Throughput via Ollama (Local LLMs)
LarHope/ollama-benchmark
Ollama-based benchmark with detailed I/O tokens-per-second reporting; written in Python, with a DeepSeek R1 example.
qcri/LLMeBench
Benchmarking Large Language Models
THUDM/LongBench
LongBench v2 and LongBench (ACL '25 & '24)