zjunlp/DynamicKnowledgeCircuits

[ACL 2025] How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training

Overall score: 29 / 100 (Experimental)

This research project investigates how Large Language Models (LLMs) learn and store new information during continual pre-training. It takes raw knowledge entities and training data as input, then identifies and evaluates the specific neural 'circuits' that handle this knowledge. The project helps AI researchers and machine learning engineers better understand and improve how LLMs acquire and retain new facts.

No commits in the last 6 months.

Use this if you are a researcher or engineer looking to delve into the internal mechanisms of LLMs and optimize their knowledge acquisition capabilities during ongoing training.

Not ideal if you are looking for an off-the-shelf tool to directly apply LLMs to real-world tasks or enhance their performance without understanding the underlying neural processes.

Tags: LLM training optimization, AI model interpretability, continual learning research, knowledge representation in AI, neural network analysis
Stale (6m) · No Package · No Dependents

Maintenance: 2 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 3 / 25


Stars: 47
Forks: 1
Language: Jupyter Notebook
License: MIT
Last pushed: Jul 18, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zjunlp/DynamicKnowledgeCircuits"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
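For programmatic access, the curl request above can also be issued from Python. A minimal sketch, assuming only the endpoint path shown in the curl example; the JSON response shape is not documented here, so the fetch step is left as a commented-out stub:

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-score API URL for a GitHub repository."""
    return f"{BASE}/{owner}/{repo}"


url = quality_url("zjunlp", "DynamicKnowledgeCircuits")
print(url)

# Uncomment to fetch live data (no key needed, up to 100 requests/day;
# the response is assumed to be JSON):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```

Swapping in a different `owner`/`repo` pair queries the same endpoint for another repository.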