RISE-UNIBAS/humanities_data_benchmark

LLM Benchmark Suite for Humanities Data

Overall score: 50/100 (Established)

This suite helps digital humanities researchers and practitioners evaluate how well different AI models perform on tasks involving historical documents and visual materials. You provide the benchmark datasets (images and text files) and prompts, and it produces quantifiable comparisons and performance scores for various AI models. This is for academics, archivists, librarians, and other specialists working with historical or cultural datasets.

Use this if you need to make evidence-based decisions about which AI model is most effective and cost-efficient for specific digital humanities tasks like transcribing historical scripts or extracting metadata from archival documents.

Not ideal if you are looking for pre-computed benchmark results without wanting to run or configure your own evaluations.

digital-humanities historical-research archival-science cultural-heritage text-recognition
No package · No dependents
Maintenance: 13/25
Adoption: 5/25
Maturity: 16/25
Community: 16/25


Stars: 14
Forks: 6
Language: Python
License: GPL-3.0
Last pushed: Mar 18, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/RISE-UNIBAS/humanities_data_benchmark"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
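The same endpoint can be called from a script. A minimal sketch in Python, assuming the endpoint returns JSON (the response schema is not documented here, so the `fetch_quality` helper and the dict return type are assumptions):

```python
# Minimal sketch: query the quality API shown above for a repo.
# Assumes the endpoint returns a JSON object; schema not verified.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"


def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """GET the quality record; assumes a JSON response body."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Within the keyless tier this counts against the 100 requests/day limit.
    print(quality_url("RISE-UNIBAS", "humanities_data_benchmark"))
```

With an API key for the 1,000/day tier, the key would presumably be passed as a header or query parameter; check the API provider's documentation for the exact mechanism.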