gsarti/lcl23-xnlm-lab
Materials for the Lab "Explaining Neural Language Models from Internal Representations to Model Predictions" at AILC LCL 2023 🔍
This project provides practical exercises for understanding how neural language models (NLMs) process information and arrive at their predictions. It helps researchers and practitioners explore the internal workings of NLMs, using interpretability techniques to evaluate their linguistic knowledge and to identify potential biases or errors. You'll feed inputs to NLMs and inspect both their internal representations and their final outputs.
No commits in the last 6 months.
Use this if you are a researcher or practitioner in natural language processing (NLP) who needs to deeply understand, diagnose, and interpret the behavior of neural language models.
Not ideal if you are looking for a plug-and-play tool for general NLM application without focusing on interpretability or internal analysis.
Stars: 13
Forks: 2
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: May 31, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/gsarti/lcl23-xnlm-lab"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
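For programmatic access, the curl call above can be reproduced in Python. This is a minimal sketch: the endpoint URL and rate limits are taken from the listing, but the response schema and the header used to pass an API key are assumptions, not documented behavior.

```python
# Sketch: query the pt-edge quality API for a repository.
# URL and rate limits come from the listing above; the authentication
# header name and the JSON response shape are assumptions.
import json
import urllib.request
from typing import Optional

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, api_key: Optional[str] = None) -> dict:
    """Fetch quality data as parsed JSON.

    Passing api_key targets the higher 1,000/day limit; the
    'Authorization: Bearer' header used here is hypothetical.
    """
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")  # assumed header
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl example in the listing.
    print(quality_url("gsarti", "lcl23-xnlm-lab"))
```

The URL builder mirrors the curl example exactly; only the key-passing mechanism would need adjusting once the API's actual authentication scheme is known.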
Higher-rated alternatives
jessevig/bertviz
BertViz: Visualize Attention in Transformer Models
inseq-team/inseq
Interpretability for sequence generation models 🐛 🔍
EleutherAI/knowledge-neurons
A library for finding knowledge neurons in pretrained transformer models.
hila-chefer/Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for...
cdpierse/transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model...