dylan-slack/Tablet
The TABLET benchmark for evaluating instruction learning with LLMs for tabular prediction.
This project helps researchers evaluate how well large language models (LLMs) can make predictions on structured, tabular data when given task instructions and only a few labeled examples. It provides a collection of real-world tabular datasets, each paired with natural-language task instructions. Researchers feed these datasets and instructions to their LLMs to measure and compare predictive accuracy across tasks and few-shot settings.
No commits in the last 6 months.
Use this if you are a machine learning researcher developing or benchmarking large language models for tabular data prediction and want to understand how instructions can improve their performance, especially with limited training data.
Not ideal if you are an end-user looking for a ready-to-use predictive model or an application to directly solve a business problem with tabular data.
Stars: 25
Forks: 4
Language: Python
License: —
Category: —
Last pushed: Apr 28, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/dylan-slack/Tablet"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
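For scripted access, here is a minimal Python sketch of the same call as the curl command above. It assumes the endpoint returns JSON; the response schema is not documented here, so the result is simply pretty-printed.

import json
import requests

# Same request as the curl example above, unauthenticated (100 requests/day).
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/dylan-slack/Tablet"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on rate limiting or server errors
print(json.dumps(resp.json(), indent=2))  # assumes a JSON response body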
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems