gsarti/lcl23-xnlm-lab

Materials for the Lab "Explaining Neural Language Models from Internal Representations to Model Predictions" at AILC LCL 2023 🔍

Score: 32 / 100 (Emerging)

This project provides practical exercises for understanding how neural language models (NLMs) process information and arrive at their predictions. It helps researchers and practitioners explore the internal workings of NLMs, using techniques to evaluate their linguistic knowledge and identify potential biases or errors. You'll feed inputs to NLMs and observe both their internal representations and their final outputs.
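
To give a sense of the kind of analysis the lab covers, here is a minimal sketch of inspecting a model's internal representations with the Hugging Face transformers library. The checkpoint and input sentence are illustrative assumptions, not taken from the lab materials.

# Minimal sketch, assuming Hugging Face transformers is installed;
# "distilbert-base-uncased" is an illustrative checkpoint, not one the lab prescribes.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained(
    "distilbert-base-uncased",
    output_hidden_states=True,  # return hidden states for every layer
    output_attentions=True,     # return attention weights for every layer
)

inputs = tokenizer("The keys to the cabinet are on the table.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# hidden_states: tuple of (num_layers + 1) tensors, each (1, seq_len, hidden_dim)
# attentions:    tuple of num_layers tensors, each (1, num_heads, seq_len, seq_len)
print(len(outputs.hidden_states), outputs.hidden_states[-1].shape)
print(len(outputs.attentions), outputs.attentions[0].shape)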

No commits in the last 6 months.

Use this if you are a researcher or practitioner in natural language processing (NLP) who needs to deeply understand, diagnose, and interpret the behavior of neural language models.

Not ideal if you are looking for a plug-and-play tool for general NLM application without focusing on interpretability or internal analysis.

Tags: Natural Language Processing, Model Interpretability, AI Explainability, Linguistic Analysis, Machine Learning Research
Flags: Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 11 / 25

Stars: 13
Forks: 2
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: May 31, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/gsarti/lcl23-xnlm-lab"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
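
The same endpoint can also be called from Python. Below is a minimal sketch using the requests library; the response schema is not documented here, so the JSON is printed as-is.

import requests

# Same endpoint as the curl example above; no API key needed up to 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/gsarti/lcl23-xnlm-lab"
response = requests.get(url, timeout=10)
response.raise_for_status()

# The response fields are not documented in this listing, so print the raw JSON.
print(response.json())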