btaille/sincere

Code for "Let's Stop Incorrect Comparisons in End-to-end Relation Extraction!", EMNLP 2020

Quality score: 37 / 100 (Emerging)

This project helps natural language processing researchers and practitioners standardize how they evaluate models that extract relationships between entities from text. It takes text datasets like CoNLL04 or ACE05, applies different model architectures and evaluation methods, and outputs metrics that show how well the models perform at identifying entities and their relationships. This is useful for researchers who need to compare their new relation extraction models fairly against existing ones.
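As a rough illustration of the kind of metric such a benchmark reports, here is a minimal sketch (not the repository's code) of strict micro-F1 over predicted (head, relation, tail) triples, where a prediction counts as correct only if the full triple matches the gold annotation:

```python
# Hedged sketch: strict micro-F1 over relation triples.
# The triple format and example data below are illustrative assumptions,
# not taken from the SInCERE codebase.
def micro_f1(gold: set, pred: set) -> float:
    """Strict micro-F1: a triple is correct only on an exact match."""
    if not gold or not pred:
        return 0.0
    tp = len(gold & pred)                     # exact-match true positives
    precision = tp / len(pred)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: 1 of 2 predictions matches the 2 gold triples,
# so precision = recall = 0.5 and F1 = 0.5.
gold = {("Smith", "works_for", "Acme"), ("Acme", "based_in", "Paris")}
pred = {("Smith", "works_for", "Acme"), ("Smith", "lives_in", "Paris")}
print(micro_f1(gold, pred))  # 0.5
```

The paper's point is that the exact matching criterion (strict vs. boundaries-only, for example) changes reported scores, so comparisons are only fair under an identical criterion.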

No commits in the last 6 months.

Use this if you are a researcher or NLP engineer developing or evaluating end-to-end relation extraction models and need a consistent way to benchmark their performance.

Not ideal if you are a business user looking for a pre-trained, production-ready solution to extract relationships from your specific domain text.

natural-language-processing information-extraction relation-extraction model-evaluation text-mining
Status: Stale (6m) · No Package · No Dependents

- Maintenance: 0 / 25
- Adoption: 6 / 25
- Maturity: 16 / 25
- Community: 15 / 25


- Stars: 22
- Forks: 5
- Language: Python
- License: Apache-2.0
- Last pushed: Jun 14, 2021
- Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/btaille/sincere"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
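The same request can be made from Python. This is a minimal sketch using only the standard library; the endpoint URL comes from the listing above, but the response schema is an assumption (the code simply pretty-prints whatever JSON the API returns):

```python
import json
import urllib.request

# Endpoint taken from the listing; response fields are not assumed.
URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/btaille/sincere"

def fetch_report(url: str) -> dict:
    """Fetch the quality report as parsed JSON."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(json.dumps(fetch_report(URL), indent=2))
```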