google-deepmind/gemma_penzai

A JAX research toolkit, built on Penzai, for visualizing, manipulating, and understanding Gemma models, with multi-modal support.

Overall score: 40 / 100 (Emerging)

This toolkit helps AI researchers and interpretability scientists explore and understand how multimodal Large Language Models (LLMs) like Gemma 3 process information. It takes an existing Gemma model (including those with vision capabilities) and allows you to visualize and manipulate its internal mechanisms. The output helps researchers gain deeper insights into model behavior.

Use this if you are an AI researcher or safety scientist who needs to perform mechanistic interpretability on Gemma models, especially multimodal versions, to understand their internal workings.

Not ideal if you are a developer looking to simply deploy or fine-tune Gemma models without needing to deeply analyze their internal computational graphs and behaviors.

AI-interpretability mechanistic-understanding multimodal-LLMs AI-safety neural-network-analysis
No package · No dependents
Maintenance: 10 / 25
Adoption: 9 / 25
Maturity: 13 / 25
Community: 8 / 25


Stars: 90
Forks: 5
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Jan 13, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/google-deepmind/gemma_penzai"

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000 requests/day.
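The endpoint above follows an owner/repo path pattern, so it is easy to construct programmatically. A minimal sketch in Python (the `quality_url` helper is hypothetical, not part of any published client; only the URL itself comes from the listing above):

```python
def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo pair."""
    base = "https://pt-edge.onrender.com/api/v1/quality/transformers"
    return f"{base}/{owner}/{repo}"

# Reproduces the curl target shown above.
print(quality_url("google-deepmind", "gemma_penzai"))
# → https://pt-edge.onrender.com/api/v1/quality/transformers/google-deepmind/gemma_penzai
```

The URL can then be fetched with any HTTP client (e.g. `curl` or `urllib.request`); the response format is not documented here, so pretty-print the JSON to inspect its fields.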