asahi417/relbert

The official implementation of "Distilling Relation Embeddings from Pre-trained Language Models" (EMNLP 2021, main conference), which derives high-quality relation embeddings from pre-trained language models.

Score: 43 / 100 (Emerging)

RelBERT helps natural language processing (NLP) practitioners understand and compare the relationships between word pairs, such as "Paris-France" or "doctor-hospital." It takes two words as input and generates a numerical vector that represents their relationship. This vector can then be used to find other word pairs with similar relationships or to classify relationships. This tool is ideal for NLP researchers, data scientists, or computational linguists working with semantic relationships.

No commits in the last 6 months. Available on PyPI.

Use this if you need to quantitatively measure the semantic relationship between any two words and want to find other pairs that share the same kind of relationship.
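As a sketch of how such relation vectors are typically compared: pairs that stand in the same relationship should have embeddings with high cosine similarity. The vectors below are toy values for illustration, not real RelBERT output, and the helper function is written here rather than taken from the library.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy relation embeddings (illustrative values only, not model output).
vec_paris_france = [0.9, 0.1, 0.3]      # capital-of
vec_tokyo_japan = [0.8, 0.2, 0.35]      # capital-of
vec_doctor_hospital = [0.1, 0.9, 0.4]   # works-at

# Two capital-of pairs should score closer to each other
# than a capital-of pair does to a works-at pair.
same_relation = cosine_similarity(vec_paris_france, vec_tokyo_japan)
diff_relation = cosine_similarity(vec_paris_france, vec_doctor_hospital)
print(same_relation > diff_relation)  # True
```

With real RelBERT embeddings in place of the toy vectors, the same comparison ranks candidate word pairs by how closely their relation matches a query pair.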

Not ideal if your primary goal is to generate human-readable text or answer complex factual questions directly, as this tool focuses on relation embedding rather than language generation.

Tags: natural-language-processing, computational-linguistics, semantic-analysis, information-retrieval, analogy-solving
Status: Stale (6 months), No Dependents
Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 25 / 25
Community: 10 / 25


Stars: 46
Forks: 5
Language: Python
License: MIT
Last pushed: Dec 02, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/asahi417/relbert"

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000 requests/day.