AIRI-Institute/Probing_framework
Framework for probing tasks
This framework helps NLP researchers and computational linguists understand what linguistic knowledge large language models (LLMs) capture internally. You feed it text data annotated in the CoNLL-U format (e.g., Universal Dependencies treebanks), and it outputs visualizable results showing how well an LLM encodes specific linguistic features. It is aimed at academic researchers and practitioners studying LLM interpretability and evaluation.
No commits in the last 6 months.
Use this if you need to systematically evaluate and interpret the linguistic capabilities encoded within large language models across multiple languages and morphosyntactic features.
Not ideal if you are looking for a tool to train a new language model or extract information directly from an LLM for a downstream application.
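For orientation, here is a minimal sketch of the general probing recipe this kind of framework automates: parse a CoNLL-U file, embed each token with a frozen transformer, and train a linear classifier to predict a morphosyntactic feature from the embeddings. This is not the framework's own API; the model choice (bert-base-multilingual-cased), the probed feature (Number), and the helper names are illustrative assumptions.

    # Minimal probing sketch (not this framework's API): train a linear probe
    # on frozen transformer embeddings to predict a UD morphological feature.
    # The model, the probed feature (Number), and all names are illustrative.
    import torch
    from transformers import AutoModel, AutoTokenizer
    from sklearn.linear_model import LogisticRegression

    # Tiny inline CoNLL-U sample; a real run would read a UD treebank file.
    CONLLU = (
        "1\tcats\tcat\tNOUN\t_\tNumber=Plur\t_\t_\t_\t_\n"
        "1\tcat\tcat\tNOUN\t_\tNumber=Sing\t_\t_\t_\t_\n"
        "1\tdogs\tdog\tNOUN\t_\tNumber=Plur\t_\t_\t_\t_\n"
        "1\tdog\tdog\tNOUN\t_\tNumber=Sing\t_\t_\t_\t_\n"
    )

    def read_conllu(text):
        """Yield (FORM, FEATS) pairs from CoNLL-U token lines."""
        for line in text.splitlines():
            if line and not line.startswith("#"):
                cols = line.split("\t")
                yield cols[1], cols[5]  # FORM and FEATS columns

    tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    model = AutoModel.from_pretrained("bert-base-multilingual-cased").eval()

    X, y = [], []
    for form, feats in read_conllu(CONLLU):
        with torch.no_grad():
            out = model(**tokenizer(form, return_tensors="pt"))
        # Mean-pool subword vectors into one representation per word.
        X.append(out.last_hidden_state[0].mean(dim=0).numpy())
        y.append("Number=Plur" in feats)

    probe = LogisticRegression(max_iter=1000).fit(X, y)
    print("probe accuracy:", probe.score(X, y))

A real probing study would use proper train/dev/test splits and control tasks; probe accuracy on held-out data is what indicates how linearly recoverable a feature is from the model's representations.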
Stars
31
Forks
9
Language
Python
License
—
Category
—
Last pushed
Mar 24, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AIRI-Institute/Probing_framework"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
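If you prefer Python over curl, a hypothetical equivalent using the requests library looks like this; the endpoint URL is taken from above, but the response schema is not documented here, so the code just prints the raw JSON payload rather than assuming field names.

    # Hypothetical Python equivalent of the curl call above. The endpoint
    # URL comes from this page; the response schema is undocumented here,
    # so we print the raw JSON instead of assuming particular field names.
    import requests

    URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "transformers/AIRI-Institute/Probing_framework")

    response = requests.get(URL, timeout=10)
    response.raise_for_status()  # fail loudly on rate limiting or errors
    print(response.json())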
Higher-rated alternatives
galilai-group/stable-pretraining
Reliable, minimal and scalable library for pretraining foundation and world models
CognitiveAISystems/MAPF-GPT
[AAAI-2025] This repository contains MAPF-GPT, a deep learning-based model for solving MAPF...
UKPLab/gpl
Powerful unsupervised domain adaptation method for dense retrieval. Requires only unlabeled...
larslorch/avici
Amortized Inference for Causal Structure Learning, NeurIPS 2022
svdrecbd/mhc-mlx
MLX + Metal implementation of mHC: Manifold-Constrained Hyper-Connections by DeepSeek-AI.