stanfordnlp/axbench
Stanford NLP Python library for benchmarking the utility of LLM interpretability methods
This is a benchmarking library designed for AI researchers and practitioners who are developing or evaluating methods to understand and control large language models (LLMs). It helps you assess how well your interpretability techniques can detect specific concepts within an LLM's internal workings and how effectively they can steer the model's behavior. You provide concept lists and your interpretability method, and it outputs performance metrics for concept detection and model steering.
Use this if you are developing or rigorously testing new methods to interpret or steer the behavior of large language models.
Not ideal if you are an end-user simply looking to apply an existing interpretability tool to understand a specific LLM output without benchmarking new techniques.
Stars: 175
Forks: 27
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 12, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/stanfordnlp/axbench"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
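The same endpoint can also be called from Python. Below is a minimal sketch using the requests library against the URL shown in the curl example; it assumes the unauthenticated 100-requests/day tier and a JSON response whose exact fields are not documented on this page.

import requests

# Fetch the quality record for stanfordnlp/axbench from the endpoint shown above.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/stanfordnlp/axbench"
resp = requests.get(url, timeout=10)
resp.raise_for_status()

# The payload is assumed to be JSON; inspect it to see the available fields.
data = resp.json()
print(data)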
Related repositories
aidatatools/ollama-benchmark
LLM Benchmark for Throughput via Ollama (Local LLMs)
LarHope/ollama-benchmark
Ollama-based benchmark with detailed input/output tokens-per-second metrics; Python, with a DeepSeek R1 example.
qcri/LLMeBench
Benchmarking Large Language Models
THUDM/LongBench
LongBench v2 and LongBench (ACL '25 & '24)
microsoft/LLF-Bench
A benchmark for evaluating learning agents based on just language feedback