MurtyShikhar/TreeProjections

Tool to measure tree-structuredness of the internal algorithm learnt by a transformer

Score: 32 / 100 (Emerging)

This tool helps AI researchers measure how tree-structured a transformer's internal computation is; hierarchical, tree-like structure matters for tasks such as natural language processing. Given a trained transformer model and a dataset, it outputs a 'tree projection score' indicating how closely the model's internal representations follow a tree structure. This is useful for researchers analyzing the internal workings and interpretability of neural networks.

No commits in the last 6 months.

Use this if you are an AI researcher studying transformer models and want to quantify how much their internal representations resemble hierarchical, tree-like structures.

Not ideal if you are looking for a tool to build or train new transformer models, or to improve model performance on a specific task.

AI research · NLP analysis · Neural network interpretability · Transformer models
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 11 / 25
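
The overall score appears to be the sum of the four 25-point subscores listed above; that summation rule is an assumption, but it is consistent with the numbers on this page:

```python
# Subscores as shown on this page; summing them is an assumed
# scoring rule, consistent with the 32/100 overall score.
subscores = {"Maintenance": 0, "Adoption": 5, "Maturity": 16, "Community": 11}
total = sum(subscores.values())
print(total)  # → 32
```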

How are scores calculated?

Stars: 12
Forks: 2
Language: Python
License: MIT
Last pushed: May 24, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/MurtyShikhar/TreeProjections"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
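
The same endpoint can be called from Python. This is a minimal sketch: the URL pattern is taken from the curl example above, while the meaning of the "transformers" path segment and the shape of the JSON response are assumptions.

```python
# Build the quality-API URL shown in the curl example above.
# The path pattern (registry/owner/repo) is inferred from that
# example; field names in the response are not documented here.
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, owner: str, repo: str) -> str:
    """Return the quality-score endpoint URL for a repository."""
    return f"{API_BASE}/{quote(registry)}/{quote(owner)}/{quote(repo)}"

url = quality_url("transformers", "MurtyShikhar", "TreeProjections")
print(url)
# To actually fetch the data (requires network access):
#   import json, urllib.request
#   data = json.load(urllib.request.urlopen(url))
```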