zjunlp/KnowledgeCircuits

[NeurIPS 2024] Knowledge Circuits in Pretrained Transformers

Quality score: 42 / 100 (Emerging)

This project helps AI researchers and developers understand how large language models (LLMs) store and use specific pieces of knowledge, like country-capital relationships. By identifying the "knowledge circuits" within these complex models, you can gain insights into their internal workings. It takes a pretrained language model and a specific type of knowledge as input, then outputs a visual representation of the circuit responsible for that knowledge.


Use this if you are a researcher or developer working with large language models and need to investigate which internal components are responsible for specific factual knowledge within them.

Not ideal if you are an end-user simply looking to apply LLMs for tasks like content generation or data analysis, as this is a research tool for model introspection.

Topics: AI Research · Large Language Models · Model Interpretability · Deep Learning · Transformer Architecture

No package · No dependents

Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 164
Forks: 11
Language: Python
License: MIT
Last pushed: Nov 14, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zjunlp/KnowledgeCircuits"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
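The same endpoint can be called from a script. Below is a minimal sketch in Python's standard library: the URL pattern is taken verbatim from the curl example above, while the `quality_url` and `fetch_quality` helper names (and the assumption that the response body is JSON) are illustrative, not part of any documented client.

```python
# Minimal sketch of calling the quality endpoint shown above.
# The URL pattern comes from the curl example; the helper names and the
# assumption that the response is JSON are ours, not documented API facts.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(registry: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a given registry and repository."""
    return f"{BASE}/{registry}/{owner}/{repo}"


def fetch_quality(registry: str, owner: str, repo: str) -> dict:
    """Fetch the quality report and decode it as JSON (keyless tier)."""
    with urllib.request.urlopen(quality_url(registry, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl command above.
    print(quality_url("transformers", "zjunlp", "KnowledgeCircuits"))
```

Keeping URL construction in its own function makes it easy to query other repositories without touching the request logic.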