poloclub/transformer-explainer
Transformer Explained Visually: Learn How LLM Transformer Models Work with Interactive Visualization
This interactive visualization helps you understand how large language models (LLMs) like GPT generate text. You enter your own text, and the tool shows, in real time, how the model's internal components work together to predict the next word. It is ideal for students, researchers, or anyone curious about the mechanics behind AI text generation.
Use this if you want to visually and interactively learn the fundamental operations of Transformer-based AI models.
Not ideal if you are looking for a tool to build or train your own language models or analyze specific model performance metrics.
Stars: 6,916
Forks: 740
Language: JavaScript
License: MIT
Category:
Last pushed: Mar 12, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/poloclub/transformer-explainer"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
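The same endpoint can be called programmatically. A minimal sketch in Python, assuming the endpoint returns a JSON payload (the response schema is not documented here, so the code only builds the URL and decodes whatever JSON comes back):

```python
import json
import urllib.request

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload.

    Without an API key this is rate-limited to 100 requests/day;
    a free key raises the limit to 1,000/day.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("poloclub", "transformer-explainer")` requests the same URL as the curl command above and returns the decoded response as a dict.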
Recent Releases
Related models
huggingface/text-generation-inference: Large Language Model Text Generation Inference
OpenMachine-ai/transformer-tricks: A collection of tricks and tools to speed up transformer models
IBM/TabFormer: Code & Data for "Tabular Transformers for Modeling Multivariate Time Series" (ICASSP, 2021)
tensorgi/TPA: [NeurIPS 2025 Spotlight] TPA: Tensor ProducT ATTenTion Transformer (T6)...
lorenzorovida/FHE-BERT-Tiny: Source code for the paper "Transformer-based Language Models and Homomorphic Encryption: an...