asahi417/relbert
The official implementation of "Distilling Relation Embeddings from Pre-trained Language Models" (EMNLP 2021, main conference): high-quality relation embeddings distilled from pre-trained language models.
RelBERT helps natural language processing (NLP) practitioners quantify and compare the relationships between word pairs, such as "Paris-France" or "doctor-hospital". It takes a pair of words as input and produces a numerical vector representing their relationship; that vector can then be used to find other word pairs with a similar relationship, or to classify relationships. It is aimed at NLP researchers, data scientists, and computational linguists working with semantic relations.
No commits in the last 6 months. Available on PyPI.
Use this if you need to quantitatively measure the semantic relationship between any two words and want to find other pairs that share the same kind of relationship.
Not ideal if your primary goal is to generate human-readable text or answer complex factual questions directly, as this tool focuses on relation embedding rather than language generation.
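To illustrate the idea of comparing relation vectors, here is a minimal sketch. The vectors below are tiny hand-made placeholders standing in for RelBERT output (real embeddings are high-dimensional), so the specific numbers are illustrative assumptions, not actual model output:

```python
import numpy as np

# Placeholder relation vectors: stand-ins for what RelBERT would return
# for each word pair (NOT real model output; real vectors are much larger).
paris_france = np.array([0.9, 0.1, 0.2])   # stand-in for embed(("Paris", "France"))
tokyo_japan  = np.array([0.8, 0.2, 0.1])   # stand-in for embed(("Tokyo", "Japan"))
doctor_hosp  = np.array([0.1, 0.9, 0.7])   # stand-in for embed(("doctor", "hospital"))

def cosine(a, b):
    """Cosine similarity: values near 1.0 suggest the same relation type."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two "capital-of" pairs should score closer to each other
# than to an unrelated "works-at" style pair.
print(cosine(paris_france, tokyo_japan))  # high similarity
print(cosine(paris_france, doctor_hosp))  # lower similarity
```

In practice you would replace the placeholder arrays with vectors produced by the RelBERT model and rank candidate pairs by cosine similarity to a query pair.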
Stars: 46
Forks: 5
Language: Python
License: MIT
Category:
Last pushed: Dec 02, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/asahi417/relbert"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
davidsbatista/BREDS
"Bootstrapping Relationship Extractors with Distributional Semantics" (Batista et al., 2015) in...
davidsbatista/Snowball
Implementation with some extensions of the paper "Snowball: Extracting Relations from Large...
nicolay-r/AREkit
Document level Attitude and Relation Extraction toolkit (AREkit) for sampling and processing...
plkmo/BERT-Relation-Extraction
PyTorch implementation for "Matching the Blanks: Distributional Similarity for Relation Learning" paper
thunlp/FewRel
A Large-Scale Few-Shot Relation Extraction Dataset