gkiril/benchie
Comprehensive evaluation framework for Open Information Extraction.
This tool helps researchers and developers evaluate the performance of Open Information Extraction (OIE) systems. It takes manually annotated 'gold standard' extractions and the extractions produced by various OIE systems as input, then outputs precision, recall, and F1 scores so users can compare how well different systems extract factual information from text. It is designed for natural language processing (NLP) researchers and engineers who develop or use OIE technologies.
No commits in the last 6 months.
Use this if you need to objectively measure and compare the accuracy of different Open Information Extraction (OIE) models on your textual data.
Not ideal if you are looking for an OIE system itself rather than a tool to evaluate existing ones, or if your evaluation needs go beyond precision, recall, and F1 scores.
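The precision/recall/F1 comparison described above can be sketched in a few lines. This is a minimal illustration assuming exact-match scoring over extraction triples; BenchIE itself scores system output against annotated fact synsets (sets of acceptable paraphrases per fact), so treat this only as a conceptual outline, not the tool's actual algorithm.

```python
def prf1(gold, predicted):
    """Precision, recall, and F1 over two collections of extractions.

    Assumes exact-match (subject, relation, object) triples, which is a
    simplification of BenchIE's fact-synset-based matching.
    """
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)  # extractions the system got right
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```

For example, a system that returns one correct triple and one spurious one against a single gold triple scores precision 0.5, recall 1.0, and F1 about 0.67.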
Stars: 40
Forks: 10
Language: Python
License: —
Category: —
Last pushed: Jun 21, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/gkiril/benchie"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
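From Python, the same endpoint can be queried with the standard library alone. This is a sketch assuming the endpoint returns JSON (the response schema is not documented here); `build_url` and `fetch_quality` are hypothetical helper names, not part of any published client.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(category, owner, repo):
    """Build the per-repo quality endpoint URL."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category, owner, repo, timeout=10):
    """Fetch the quality record and parse it, assuming a JSON body."""
    with urllib.request.urlopen(build_url(category, owner, repo),
                                timeout=timeout) as resp:
        return json.load(resp)
```

Calling `fetch_quality("nlp", "gkiril", "benchie")` hits the same URL as the curl command above; add an API key header if you register for the higher rate limit.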
Higher-rated alternatives
luheng/deep_srl
Code and pre-trained model for: Deep Semantic Role Labeling: What Works and What's Next
sileod/tasksource
Datasets collection and preprocessings framework for NLP extreme multitask learning
loomchild/maligna
Bilingual sentence aligner
CK-Explorer/DuoSubs
Semantic subtitle aligner and merger for bilingual subtitle syncing.
coastalcph/lex-glue
LexGLUE: A Benchmark Dataset for Legal Language Understanding in English