btaille/sincere
Code for "Let's Stop Incorrect Comparisons in End-to-end Relation Extraction!", EMNLP 2020
This project helps natural language processing researchers and practitioners standardize how they evaluate end-to-end relation extraction models, i.e., models that jointly identify entities in text and the relations between them. It takes datasets such as CoNLL04 or ACE05, applies different model architectures and evaluation settings, and reports entity and relation metrics so that new relation extraction models can be compared fairly against existing ones.
No commits in the last 6 months.
Use this if you are a researcher or NLP engineer developing or evaluating end-to-end relation extraction models and need a consistent way to benchmark their performance.
Not ideal if you are a business user looking for a pre-trained, production-ready solution to extract relationships from your specific domain text.
Stars: 22
Forks: 5
Language: Python
License: Apache-2.0
Category:
Last pushed: Jun 14, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/btaille/sincere"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
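For scripted access, the same endpoint can be queried from Python. Below is a minimal sketch using the requests library; it assumes the endpoint returns JSON, since the response schema is not documented in this listing.

    import requests

    # Query the pt-edge quality endpoint for btaille/sincere
    # (same URL as the curl command above).
    # Anonymous access is limited to 100 requests/day.
    url = "https://pt-edge.onrender.com/api/v1/quality/nlp/btaille/sincere"
    response = requests.get(url, timeout=10)
    response.raise_for_status()

    # Assumption: the endpoint returns JSON; the exact field names
    # are not documented here, so the full payload is printed as-is.
    data = response.json()
    print(data)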
Higher-rated alternatives
davidsbatista/BREDS
"Bootstrapping Relationship Extractors with Distributional Semantics" (Batista et al., 2015) in...
davidsbatista/Snowball
Implementation with some extensions of the paper "Snowball: Extracting Relations from Large...
nicolay-r/AREkit
Document level Attitude and Relation Extraction toolkit (AREkit) for sampling and processing...
plkmo/BERT-Relation-Extraction
PyTorch implementation for "Matching the Blanks: Distributional Similarity for Relation Learning" paper
thunlp/FewRel
A Large-Scale Few-Shot Relation Extraction Dataset