davidsbatista/NER-Evaluation
An implementation of full named-entity evaluation metrics based on SemEval'13 Task 9: evaluation is done not at the tag/token level, but over all the tokens that make up a named entity.
This helps evaluate how accurately a Named Entity Recognition (NER) system identifies and categorizes specific entities like names, locations, or organizations in text. It takes a list of correct entities (the 'gold standard') and the entities identified by your system, then outputs a detailed breakdown of correct, incorrect, and missing identifications. Anyone working with text analysis or information extraction, such as data scientists, computational linguists, or NLP engineers, would use this to assess their NER models.
222 stars. No commits in the last 6 months.
Use this if you need to thoroughly evaluate the performance of your Named Entity Recognition model beyond simple token-level matching, considering partial matches and incorrect entity types.
Not ideal if you only need a basic, quick evaluation of individual token tagging accuracy rather than a comprehensive entity-level assessment.
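To make the distinction concrete, here is a minimal, illustrative sketch of entity-level exact-match evaluation in the SemEval'13 style. This is not this repository's API; the function name and the `(start, end, label)` tuple representation are assumptions for illustration. An entity counts as correct only if both its span and its type match the gold annotation, and other predictions fall into the incorrect (right span, wrong type), spurious, or missed buckets.

```python
def evaluate_entities(gold, pred):
    """Entity-level evaluation sketch (hypothetical helper, not this repo's API).

    gold, pred: iterables of (start, end, label) entity tuples.
    """
    gold, pred = set(gold), set(pred)
    gold_spans = {(s, e) for s, e, _ in gold}
    pred_spans = {(s, e) for s, e, _ in pred}

    correct = gold & pred                            # span and type both match
    incorrect = {p for p in pred - gold
                 if (p[0], p[1]) in gold_spans}      # span match, wrong type
    spurious = {p for p in pred
                if (p[0], p[1]) not in gold_spans}   # predicted span not in gold
    missed = {g for g in gold
              if (g[0], g[1]) not in pred_spans}     # gold span never predicted

    precision = len(correct) / len(pred) if pred else 0.0
    recall = len(correct) / len(gold) if gold else 0.0
    return {"correct": len(correct), "incorrect": len(incorrect),
            "missed": len(missed), "spurious": len(spurious),
            "precision": round(precision, 3), "recall": round(recall, 3)}


gold = {(0, 2, "PER"), (5, 7, "LOC"), (9, 10, "ORG")}
pred = {(0, 2, "PER"), (5, 7, "ORG"), (12, 13, "LOC")}
print(evaluate_entities(gold, pred))
# → {'correct': 1, 'incorrect': 1, 'missed': 1, 'spurious': 1,
#    'precision': 0.333, 'recall': 0.333}
```

A token-level scorer would give partial credit for the overlapping tokens of a wrong-type or truncated entity; the entity-level view above is what separates this tool from simple tag accuracy.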
Stars
222
Forks
48
Language
Python
License
MIT
Category
NLP
Last pushed
Jul 02, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/davidsbatista/NER-Evaluation"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Compare
Higher-rated alternatives
MantisAI/nervaluate
Full named-entity (i.e., not tag/token) evaluation metrics based on SemEval’13
dice-group/gerbil
GERBIL - General Entity annotatoR Benchmark
bltlab/seqscore
SeqScore: Scoring for named entity recognition and other sequence labeling tasks
syuoni/eznlp
Easy Natural Language Processing
LHNCBC/metamaplite
A near real-time named-entity recognizer