3B-Group/ConvRe

🤖ConvRe🤯: An Investigation of LLMs’ Inefficacy in Understanding Converse Relations (EMNLP 2023)

Score: 14 / 100 · Experimental

This project evaluates how well large language models (LLMs) understand relationships between concepts, especially when those relationships are expressed in a converse (opposite) way. It takes structured facts (like "x has part y") and assesses an LLM's ability to interpret them correctly in both normal and converse forms, revealing whether the model truly grasps the meaning or merely relies on shortcuts. This tool is for AI researchers, natural language processing engineers, and anyone developing or evaluating LLMs.
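To make the normal-versus-converse distinction concrete, here is a minimal sketch of the two framings the benchmark probes. The relation pair and helper names below are illustrative assumptions, not taken from the repository's actual data files or code:

```python
# Hypothetical converse mapping: the converse relation states the same
# fact with the argument order swapped ("x has part y" <-> "y is part of x").
CONVERSE = {"has part": "is part of"}

def normal_prompt(head, relation, tail):
    """State the fact in its normal direction, e.g. 'car has part wheel'."""
    return f"{head} {relation} {tail}"

def converse_prompt(head, relation, tail):
    """State the same fact via the converse relation with arguments swapped,
    e.g. 'wheel is part of car'."""
    return f"{tail} {CONVERSE[relation]} {head}"

print(normal_prompt("car", "has part", "wheel"))    # car has part wheel
print(converse_prompt("car", "has part", "wheel"))  # wheel is part of car
```

A model that truly understands the relation should judge both statements as expressing the same fact; one relying on surface patterns may not.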

No commits in the last 6 months.

Use this if you need to rigorously test the semantic understanding capabilities of a large language model, particularly its ability to discern between normal and converse relational meanings.

Not ideal if you are looking for a tool to train LLMs or apply them directly to a specific real-world text analysis task, as this is solely an evaluation benchmark.

Tags: LLM evaluation, natural language understanding, AI research, semantic relation extraction, knowledge graph analysis
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 0 / 25


Stars: 24
Forks:
Language: Python
License:
Last pushed: Oct 10, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/3B-Group/ConvRe"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
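The endpoint above can also be called from Python. The URL pattern follows the curl example; the JSON schema of the response is not documented here, so any fields you read from it are an assumption to verify against a live call:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner, repo):
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, timeout=10):
    """Fetch and decode the quality report (requires network access;
    subject to the 100 requests/day keyless limit)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)

print(quality_url("3B-Group", "ConvRe"))
# → https://pt-edge.onrender.com/api/v1/quality/llm-tools/3B-Group/ConvRe
```

With a free API key, you would presumably pass it as a header or query parameter; check the provider's documentation for the exact mechanism.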