wesg52/universal-neurons

Universal Neurons in GPT2 Language Models

Score: 39 / 100 (Emerging)

This project helps researchers understand the inner workings of language models like GPT-2 by providing tools to analyze individual neurons. It takes precomputed activation and weight data from these models as input and produces summary statistics describing neuron behavior and each neuron's connections to other neurons, to attention heads, and to the vocabulary. The primary users are researchers studying interpretability and mechanistic understanding of neural networks.
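For illustration only, here is a minimal sketch of the kind of neuron-level statistic such analyses compute, using the TransformerLens library rather than this repo's own scripts (the layer index, neuron index, and prompt are arbitrary placeholders):

import torch
from transformer_lens import HookedTransformer

# Load GPT-2 small and cache activations on a toy prompt
model = HookedTransformer.from_pretrained("gpt2")
tokens = model.to_tokens("The quick brown fox jumps over the lazy dog.")
_, cache = model.run_with_cache(tokens)

# MLP post-activations for layer 5: shape [batch, pos, d_mlp]
acts = cache["post", 5]
neuron_idx = 123  # hypothetical neuron of interest
neuron_acts = acts[0, :, neuron_idx]

# Activation summary statistics of the kind aggregated per neuron
print(f"mean={neuron_acts.mean():.3f} std={neuron_acts.std():.3f} max={neuron_acts.max():.3f}")

# Weight-based connection to the vocabulary: compose the neuron's
# output weights with the unembedding to see which tokens it boosts
vocab_effect = model.W_out[5, neuron_idx] @ model.W_U
print("Top boosted tokens:", model.to_str_tokens(torch.topk(vocab_effect, 5).indices))

Note that the repo itself operates on precomputed activation and weight data rather than a live model, so this is a conceptual analogue, not its actual workflow.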

No commits in the last 6 months.

Use this if you are a machine learning researcher who wants to explore the functions of individual neurons in GPT-2 language models and understand how they contribute to overall model behavior.

Not ideal if you are looking to train new language models, fine-tune existing ones for specific tasks, or generate text directly, as this tool focuses on analyzing model internals rather than applying models to tasks.

AI-interpretability mechanistic-interpretability GPT-2-analysis neural-network-research language-model-science
Stale 6m · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 16 / 25

Stars: 30
Forks: 7
Language: Jupyter Notebook
License: MIT
Last pushed: May 28, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/wesg52/universal-neurons"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
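The same endpoint can also be called programmatically; a minimal Python sketch, assuming the endpoint returns a JSON body:

import requests

# Same URL as the curl command above
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/wesg52/universal-neurons"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # assumption: response body is JSON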