nervaluate and NER-Evaluation
These are **competitors**: both implement the same entity-level (rather than token-level) named-entity evaluation metrics from SemEval'13. nervaluate is the actively maintained choice, with substantial monthly download activity versus none for NER-Evaluation.
About nervaluate
MantisAI/nervaluate
Full named-entity (i.e., not tag/token) evaluation metrics based on SemEval’13
When you're building systems that identify specific entities like people, organizations, or locations in text, it's crucial to accurately measure how well your system performs. This tool helps you evaluate your named entity recognition (NER) models by comparing your system's output against a set of known correct labels. It goes beyond simple word-by-word checks to tell you if the system got the whole entity right, partially right, or made a specific type of mistake. This is for anyone who needs to assess the quality of their text analysis systems, such as a computational linguist, data scientist, or researcher working with natural language processing.
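To make the "whole entity right, partially right, or a specific type of mistake" idea concrete, here is a minimal, hypothetical sketch of entity matching in the spirit of the SemEval'13 schemas. This is illustrative only and not nervaluate's actual code; the `match` function and the `(label, start, end)` tuple representation are assumptions for the example.

```python
# Illustrative sketch (not nervaluate's implementation): classify one
# predicted entity against the gold entities, SemEval'13-style.
# Entities are (label, start, end) tuples over token offsets; end is exclusive.

def match(gold, pred):
    """Return how a single predicted entity relates to the gold entities."""
    p_label, p_start, p_end = pred
    for g_label, g_start, g_end in gold:
        same_span = (g_start, g_end) == (p_start, p_end)
        overlaps = p_start < g_end and g_start < p_end
        if same_span and g_label == p_label:
            return "correct"    # exact boundaries and the right type
        if overlaps and g_label == p_label:
            return "partial"    # boundaries overlap, right type
        if overlaps:
            return "incorrect"  # overlaps a gold entity, wrong type
    return "spurious"           # predicted where no gold entity exists

gold = [("PER", 2, 4), ("LOC", 7, 8)]
print(match(gold, ("PER", 2, 4)))  # → correct
print(match(gold, ("PER", 3, 5)))  # → partial
print(match(gold, ("ORG", 7, 8)))  # → incorrect
```

A token-level score would credit the partially overlapping prediction token by token; the entity-level view instead records it as a distinct outcome, which is the distinction this tool is built around.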
About NER-Evaluation
davidsbatista/NER-Evaluation
An implementation of full named-entity evaluation metrics based on SemEval'13 Task 9: evaluation is not at the tag/token level, but considers all the tokens that are part of a named entity
This helps evaluate how accurately a Named Entity Recognition (NER) system identifies and categorizes specific entities like names, locations, or organizations in text. It takes a list of correct entities (the 'gold standard') and the entities identified by your system, then outputs a detailed breakdown of correct, incorrect, missed, and spurious identifications. Anyone working with text analysis or information extraction, such as data scientists, computational linguists, or NLP engineers, would use this to assess their NER models.
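The gold-versus-predicted breakdown described above can be sketched as a small counting routine. This is a hedged, self-contained illustration of the strict (exact span and type) scheme, not NER-Evaluation's actual code; the `strict_counts` function and the example entities are invented for the demonstration.

```python
# Hypothetical sketch of a strict entity-level breakdown: correct,
# incorrect (right span, wrong type), missed, and spurious entities.
# Entities are (label, start, end) tuples.

def strict_counts(gold, pred):
    gold_set, pred_set = set(gold), set(pred)
    correct = gold_set & pred_set
    gold_spans = {(s, e) for _, s, e in gold}
    pred_spans = {(s, e) for _, s, e in pred}
    # Predicted at a gold span but with the wrong type:
    incorrect = {p for p in pred_set - correct if (p[1], p[2]) in gold_spans}
    # Predicted where no gold entity exists at all:
    spurious = pred_set - correct - incorrect
    # Gold entities with no prediction at their span:
    missed = {g for g in gold_set - correct if (g[1], g[2]) not in pred_spans}
    return {"correct": len(correct), "incorrect": len(incorrect),
            "missed": len(missed), "spurious": len(spurious)}

gold = [("PER", 0, 2), ("LOC", 5, 6), ("ORG", 9, 11)]
pred = [("PER", 0, 2), ("MISC", 5, 6)]
counts = strict_counts(gold, pred)
# counts: 1 correct, 1 incorrect, 1 missed, 0 spurious
precision = counts["correct"] / len(pred)  # 0.5
recall = counts["correct"] / len(gold)     # ~0.333
```

Precision divides by everything the system predicted and recall by everything in the gold standard, so the incorrect and spurious categories penalize precision while missed entities penalize recall.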