CheongWoong/impact_of_cooccurrence

A repository for analyzing the impact of co-occurrence statistics on factual knowledge of large language models (EMNLP 2023 Findings).

Score: 21 / 100 (Experimental)

This project helps researchers understand how word co-occurrence in large pre-training corpora affects the factual knowledge stored in large language models. It takes pre-training data and knowledge-probing datasets as input and computes statistics such as a term-document index and co-occurrence matrices. The output includes analysis results and baselines for probing how well language models capture factual information. It is designed for AI researchers and NLP scientists studying language model behavior and knowledge representation.
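To make the core statistic concrete, here is a minimal sketch of document-level word co-occurrence counting. This is an illustration of the general technique, not the repository's actual code, which operates on full pre-training corpora and knowledge-probing datasets:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(documents):
    """Count how often each unordered word pair appears together
    in the same document (document-level co-occurrence)."""
    counts = Counter()
    for doc in documents:
        # Deduplicate and sort so each pair is counted once per document,
        # always in a canonical (alphabetical) order.
        words = sorted(set(doc.lower().split()))
        for pair in combinations(words, 2):
            counts[pair] += 1
    return counts

docs = [
    "Paris is the capital of France",
    "France borders Spain",
    "Paris hosted the Olympics",
]
counts = cooccurrence_counts(docs)
print(counts[("france", "paris")])  # -> 1: they share only the first document
```

At corpus scale the same idea is typically implemented with an inverted (term-document) index rather than a nested loop, which is why the repository computes that index first.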

No commits in the last 6 months.

Use this if you are an AI researcher investigating the mechanisms by which large language models acquire and represent factual knowledge from their training data, particularly focusing on the role of word co-occurrence statistics.

Not ideal if you are looking for a tool to build or fine-tune a practical large language model for an application, as this is a research framework for analysis.

Topics: AI Research · Natural Language Processing · Language Model Analysis · Factual Knowledge · Co-occurrence Statistics
Flags: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 0 / 25

How are scores calculated?

Stars: 10
Forks:
Language: Jupyter Notebook
License: MIT
Last pushed: Dec 13, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/CheongWoong/impact_of_cooccurrence"
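The same endpoint can be queried from Python. Note that the response schema is not documented here, so the field names used below (`score`, `label`) are assumptions; inspect the raw JSON before relying on any field:

```python
import json
from urllib.request import urlopen

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/CheongWoong/impact_of_cooccurrence")

def fetch_quality(url=URL):
    """Fetch the quality record and decode it as JSON."""
    with urlopen(url, timeout=10) as resp:
        return json.load(resp)

def parse_quality(payload):
    """Extract a (score, label) tuple from a decoded response dict.
    The field names 'score' and 'label' are guesses about the schema."""
    return payload.get("score"), payload.get("label")

# data = fetch_quality()          # requires network access
# print(parse_quality(data))      # inspect the actual fields first
```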

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.