rahul13ramesh/compositional_capabilities
Compositional Capabilities of Autoregressive Transformers: A Study on Synthetic, Interpretable Tasks
This project helps machine learning researchers study how autoregressive Transformer models learn to compose functions. Given configuration files that define synthetic tasks and data formats, it trains models and evaluates how well they perform function compositions, including compositions never seen during training. It is aimed at researchers studying the internal mechanisms and generalization behavior of large language models.
No commits in the last 6 months.
Use this if you are a machine learning researcher studying the interpretability and compositional generalization of autoregressive Transformer models on controlled, synthetic tasks.
Not ideal if you are looking for a tool to apply to real-world, messy datasets or to build production-ready applications.
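As a rough illustration of the kind of task described above (a hypothetical sketch, not the repository's actual configuration format or data pipeline), each "capability" can be thought of as a simple function on token sequences, with a training example pairing an input with the output of a sampled composition:

import random

# Hypothetical sketch of a compositional synthetic task. The vocabulary
# size (10) and the three functions below are arbitrary assumptions,
# chosen only to illustrate the idea of composing simple capabilities.
def shift(seq):       # add 1 to every token, modulo the vocab size
    return [(t + 1) % 10 for t in seq]

def reverse(seq):     # reverse the whole sequence
    return seq[::-1]

def swap_pairs(seq):  # swap each adjacent pair of tokens
    out = seq[:]
    for i in range(0, len(out) - 1, 2):
        out[i], out[i + 1] = out[i + 1], out[i]
    return out

FUNCS = [shift, reverse, swap_pairs]

def make_example(length=6, depth=2):
    seq = [random.randrange(10) for _ in range(length)]
    comp = random.sample(FUNCS, depth)  # sample a composition of distinct functions
    out = seq
    for f in comp:
        out = f(out)
    # A model would see the function names plus the input sequence,
    # and must emit the composed output autoregressively.
    return [f.__name__ for f in comp], seq, out

print(make_example())

Holding out some compositions (for example, orderings never shown during training) is what makes it possible to test compositional generalization rather than memorization.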
Stars: 10
Forks: 1
Language: Python
License: MIT
Category: transformers
Last pushed: Jun 26, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/rahul13ramesh/compositional_capabilities"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
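For programmatic access from Python, a minimal sketch is shown below. It assumes only that the endpoint returns JSON; the response schema is not documented on this page, so the raw payload is printed rather than parsed into specific fields.

import json
import urllib.request

# Sketch: fetch this repository's quality data from the public endpoint.
# No API key is attached here (the free tier allows 100 requests/day).
URL = ("https://pt-edge.onrender.com/api/v1/quality/transformers/"
       "rahul13ramesh/compositional_capabilities")

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# The response schema is undocumented here, so print it verbatim.
print(json.dumps(data, indent=2))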
Higher-rated alternatives
lucidrains/x-transformers
A concise but complete full-attention transformer with a set of promising experimental features...
kanishkamisra/minicons
Utility for behavioral and representational analyses of Language Models
lucidrains/simple-hierarchical-transformer
Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT
lucidrains/dreamer4
Implementation of Danijar's latest iteration for his Dreamer line of work
Nicolepcx/Transformers-in-Action
Companion code for the book Transformers in Action