jantrienes/nereval
Evaluation script for named entity recognition (NER) systems based on entity-level F1 score.
This tool helps you assess how well an automated system identifies and categorizes entities in text, such as product names or dates. You supply the list of entities your system found alongside a ground-truth list of what should have been found, and the output is an entity-level F1 score that quantifies recognition accuracy. It is aimed at data scientists, NLP researchers, and anyone building systems that extract structured information from unstructured text.
No commits in the last 6 months. Available on PyPI.
Use this if you need to objectively measure the performance of a Named Entity Recognition (NER) model, evaluating both the correctness of the entity type and its exact boundaries in text.
Not ideal if you are evaluating a general text classification system or a simple keyword extraction tool, as it's specifically designed for granular entity-level assessment.
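To illustrate what entity-level scoring means, here is a minimal sketch of a strict exact-match F1 over (type, start, end) tuples. This is an assumption-laden simplification, not nereval's actual API or scoring scheme (nereval's own implementation may credit text and type matches separately); the entity representation and function name here are hypothetical.

```python
def entity_f1(y_true, y_pred):
    """Entity-level F1: a predicted entity counts as correct only if its
    type and exact boundaries both match a gold entity.

    Entities are hypothetical (type, start, end) tuples, e.g.
    ("PRODUCT", 0, 6). This is a simplified exact-match variant for
    illustration only.
    """
    true_set = set(y_true)
    pred_set = set(y_pred)
    tp = len(true_set & pred_set)  # exact matches on type and span
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(true_set) if true_set else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = [("PRODUCT", 0, 6), ("DATE", 20, 30)]
pred = [("PRODUCT", 0, 6), ("DATE", 20, 29)]  # boundary off by one
print(entity_f1(gold, pred))  # 0.5
```

The off-by-one boundary on the DATE entity costs both a true positive and adds a false positive, which is why strict entity-level F1 is a harder metric than token-level scoring.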
Stars
69
Forks
8
Language
Python
License
MIT
Category
NLP
Last pushed
Apr 20, 2021
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/jantrienes/nereval"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Compare
Higher-rated alternatives
MantisAI/nervaluate
Full named-entity (i.e., not tag/token) evaluation metrics based on SemEval’13
dice-group/gerbil
GERBIL - General Entity annotatoR Benchmark
bltlab/seqscore
SeqScore: Scoring for named entity recognition and other sequence labeling tasks
syuoni/eznlp
Easy Natural Language Processing
LHNCBC/metamaplite
A near real-time named-entity recognizer