bltlab/seqscore

SeqScore: Scoring for named entity recognition and other sequence labeling tasks

Quality score: 56 / 100 (Established)

This tool helps researchers and practitioners evaluate how accurately their models identify entities in text, such as person names, organizations, or locations. You input two files: one with the correct, human-annotated labels for text sequences, and another with your model's predictions. It outputs precision, recall, and F1 scores, broken down by entity type, showing how well your model performed. It's ideal for anyone developing or assessing natural language processing systems, especially in academic or applied research settings.
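The entity-level scoring the tool reports can be illustrated with a minimal sketch: extract (type, start, end) spans from BIO tag sequences, then count exact span matches between reference and prediction. This is an independent illustration of the metric, not SeqScore's own code, and it assumes well-formed BIO input:

```python
def extract_spans(tags):
    """Extract (type, start, end) entity spans from a well-formed BIO tag sequence."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:
                spans.append((etype, start, i))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is not None and etype == tag[2:]:
            continue  # entity continues
        else:  # "O" (or an invalid continuation) ends any open entity
            if start is not None:
                spans.append((etype, start, i))
            start, etype = None, None
    if start is not None:
        spans.append((etype, start, len(tags)))
    return spans

def entity_f1(reference, predicted):
    """Micro-averaged entity-level precision, recall, and F1 over exact span matches."""
    gold, pred = set(), set()
    for si, (ref_tags, pred_tags) in enumerate(zip(reference, predicted)):
        gold |= {(si, *s) for s in extract_spans(ref_tags)}
        pred |= {(si, *s) for s in extract_spans(pred_tags)}
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

For example, if the reference tags one PER and one LOC entity but the prediction labels the second entity ORG, only one of two spans matches on each side, giving precision, recall, and F1 of 0.5.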

Available on PyPI.

Use this if you need to rigorously measure the performance of a named entity recognition (NER) or sequence labeling model against a reference standard.

Not ideal if you are looking for a tool to train a new NER model or if your text labeling task isn't based on sequential tags.
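A minimal way to try it, since the package is on PyPI. The `score` subcommand and flag names below are a hedged sketch of the CLI; run `seqscore --help` to confirm the exact options for your version:

```shell
pip install seqscore

# Score predictions against a reference file of BIO-tagged sequences
# (file names here are placeholders)
seqscore score --labels BIO --reference reference.bio predictions.bio
```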

natural-language-processing named-entity-recognition text-annotation model-evaluation computational-linguistics
Maintenance 10 / 25
Adoption 6 / 25
Maturity 25 / 25
Community 15 / 25


Stars: 23
Forks: 5
Language: Python
License: MIT
Last pushed: Feb 27, 2026
Commits (30d): 0
Dependencies: 3

Get this data via the API:

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/bltlab/seqscore"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.