PCfVW/plip-rs

Mechanistic interpretability toolkit for code LLMs, in Rust. Analysis of attention patterns in transformers (StarCoder2 3B, Qwen2.5-Coder 3B & 7B, CodeGemma 7B, Phi-3-mini-4k, Code-LLaMA-7B) and state dynamics in RNNs (RWKV-6-Finch-1B6).

Overall score: 25 / 100 (Experimental)

This project helps AI researchers understand how large language models (LLMs) for code process programming language syntax. It takes code snippets with test markers (like Python's `>>>` or Rust's `#[test]`) and reveals how the model's internal attention mechanisms focus on different parts of the code. Researchers working on mechanistic interpretability or evaluating LLM comprehension will find this tool useful.

Use this if you are an AI researcher investigating the internal workings of code-focused language models and want to analyze their attention patterns related to specific syntax.

Not ideal if you are a developer looking for an LLM to generate code, if you want an end-user code-analysis tool, or if you lack a working knowledge of mechanistic interpretability concepts.

Tags: AI research, mechanistic interpretability, language model evaluation, code comprehension, transformer analysis
No package published · No dependents
Maintenance 10 / 25
Adoption 4 / 25
Maturity 11 / 25
Community 0 / 25


Stars: 8
Forks:
Language: Rust
License: Apache-2.0
Last pushed: Mar 02, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/PCfVW/plip-rs"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.