THU-KEG/COPEN

The official code and dataset for EMNLP 2022 paper "COPEN: Probing Conceptual Knowledge in Pre-trained Language Models".

Score: 26 / 100 (Experimental)

This project provides a benchmark to evaluate how well Pre-trained Language Models (PLMs) understand concepts, not just words. It takes a PLM and a set of conceptual tasks (such as judging similarity or properties) as input, and the output helps researchers understand whether their PLM can grasp human-like conceptual knowledge. It is aimed at AI researchers and language model developers who want to analyze and improve their models' cognitive abilities.

No commits in the last 6 months.

Use this if you are an AI researcher or developer working on Pre-trained Language Models and need a standardized way to test their conceptual understanding, beyond basic linguistic tasks.

Not ideal if you are looking for a tool to directly apply language models for downstream applications like text generation, summarization, or translation.

Tags: AI research, language model evaluation, natural language understanding, cognitive AI, computational linguistics
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 4 / 25


Stars: 21
Forks: 1
Language: Python
License: MIT
Last pushed: Mar 09, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/THU-KEG/COPEN"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
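The same request can be made from Python. A minimal sketch, assuming only the endpoint shown in the curl command above; the shape of the JSON response is not documented here, so it is left unparsed beyond loading:

```python
import json
from urllib.request import urlopen

# Base endpoint taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score URL for a given repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


url = quality_url("nlp", "THU-KEG", "COPEN")

# Uncomment to fetch live data (100 requests/day without a key):
# with urlopen(url) as resp:
#     data = json.load(resp)
```

The live call is left commented out so the snippet does not consume the daily request quota when run as-is.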