target-benchmark/target
TARGET is a benchmark for evaluating Table Retrieval for Generative Tasks such as Fact Verification and Text-to-SQL.
This project evaluates how well different table retrieval systems perform when asked to find relevant tables for answering questions or generating code. Given a question and a collection of tables, it outputs scores indicating how accurately the system retrieved the correct tables and generated the right answer. It is aimed at researchers and data scientists building AI systems that interact with tabular data.
No commits in the last 6 months.
Use this if you are developing or comparing different methods for finding specific tables within large datasets to answer natural language questions or convert text to SQL.
Not ideal if you are looking for a tool to directly perform table retrieval or answer generation in a production environment, as this is a benchmark for evaluating such systems.
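The scoring idea behind such a benchmark can be sketched with a standard retrieval metric. This is a minimal illustration, assuming gold table IDs per query and using recall@k; the actual metrics TARGET reports may differ.

```python
def recall_at_k(retrieved: list[str], gold: set[str], k: int) -> float:
    """Fraction of a query's gold tables found among the top-k retrieved table IDs."""
    if not gold:
        return 0.0
    hits = sum(1 for table_id in retrieved[:k] if table_id in gold)
    return hits / len(gold)
```

For example, if a system retrieves `["sales_2021", "hr_roster", "sales_2022"]` and the gold set is `{"sales_2021", "sales_2022"}`, recall@2 is 0.5 because only one of the two gold tables appears in the top two results.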
Stars: 28
Forks: 11
Language: Jupyter Notebook
License: Apache-2.0
Category:
Last pushed: Jul 14, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/target-benchmark/target"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
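The same endpoint can be queried from Python with only the standard library. A minimal sketch: the helper names below are illustrative, and the JSON response schema is not assumed here, only that the endpoint returns JSON.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(repo: str) -> str:
    """Build the quality-API URL for an owner/name repo slug."""
    return f"{BASE}/{repo}"

def fetch_quality(repo: str) -> dict:
    """Fetch the JSON payload (requires network; no key needed on the free tier)."""
    with urllib.request.urlopen(quality_url(repo)) as resp:
        return json.load(resp)
```

Usage: `fetch_quality("target-benchmark/target")` returns the same data as the curl command above, as a Python dict.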
Higher-rated alternatives
denser-org/denser-retriever
An enterprise-grade AI retriever designed to streamline AI integration into your applications,...
rayliuca/T-Ragx
Enhancing Translation with RAG-Powered Large Language Models
neuml/rag
🚀 Retrieval Augmented Generation (RAG) with txtai. Combine search and LLMs to find insights with...
NovaSearch-Team/RAG-Retrieval
Unify Efficient Fine-tuning of RAG Retrieval, including Embedding, ColBERT, ReRanker.
RulinShao/retrieval-scaling
Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore".