bltlab/seqscore
SeqScore: Scoring for named entity recognition and other sequence labeling tasks
This tool helps researchers and practitioners evaluate how accurately their models identify entities in text, such as person names, organizations, or locations. You supply two files: one with the correct, human-annotated labels for each token sequence, and one with your model's predictions. It outputs precision, recall, and F1 scores, broken down by entity type, showing how well your model performed. It's well suited to anyone developing or assessing natural language processing systems, especially in academic or applied research settings.
Available on PyPI.
Use this if you need to rigorously measure the performance of a named entity recognition (NER) or sequence labeling model against a reference standard.
Not ideal if you are looking for a tool to train a new NER model or if your text labeling task isn't based on sequential tags.
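The scoring described above is entity-level: a prediction counts as correct only when both the span boundaries and the entity type match the reference exactly. A minimal sketch of that idea in Python (this is not SeqScore's implementation; the BIO parsing here is simplified, e.g. a stray `I-` tag simply closes the current span):

```python
def extract_spans(tags):
    """Collect (type, start, end) entity spans from a BIO tag sequence."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:                      # close the previous span
                spans.append((etype, start, i))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is not None and etype == tag[2:]:
            continue                                    # entity continues
        else:                                           # O tag or inconsistent I- tag
            if start is not None:
                spans.append((etype, start, i))
            start, etype = None, None
    if start is not None:                               # span running to the end
        spans.append((etype, start, len(tags)))
    return spans


def entity_f1(reference, predicted):
    """Micro-averaged precision, recall, and F1 over exact-match entity spans."""
    ref, pred = set(extract_spans(reference)), set(extract_spans(predicted))
    tp = len(ref & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(ref) if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


# Example: the model finds the PER entity but misses the LOC entity.
gold = ["B-PER", "I-PER", "O", "B-LOC"]
pred = ["B-PER", "I-PER", "O", "O"]
p, r, f = entity_f1(gold, pred)   # p = 1.0, r = 0.5
```

Note that exact-match span scoring is stricter than token-level accuracy: a prediction that gets the type right but the boundary off by one token scores zero for that entity.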
Stars
23
Forks
5
Language
Python
License
MIT
Category
Last pushed
Feb 27, 2026
Commits (30d)
0
Dependencies
3
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/bltlab/seqscore"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
MantisAI/nervaluate
Full named-entity (i.e., not tag/token) evaluation metrics based on SemEval’13
dice-group/gerbil
GERBIL - General Entity annotatoR Benchmark
syuoni/eznlp
Easy Natural Language Processing
LHNCBC/metamaplite
A near real-time named-entity recognizer
OpenJarbas/simple_NER
simple rule based named entity recognition