Rishit-dagli/Compositional-Attention
A TensorFlow implementation of Compositional Attention: Disentangling Search and Retrieval (Mittal et al., Mila)
This library is aimed at machine learning researchers and practitioners building neural networks in TensorFlow. It offers an alternative to standard multi-head attention, in which each head's search (query–key matching) is rigidly paired with its retrieval (value projection); compositional attention disentangles the two, so searches and retrievals can be flexibly recombined. You provide your model's hidden states (tokens) and a mask, and the layer outputs refined representations that can improve your model's performance.
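To make the search/retrieval split concrete, here is a minimal NumPy sketch of the mechanism described in the paper. This is an illustration, not this repository's API: the weight shapes, the `Wsel`/`Wkey` selection parameters, and the omission of batching and masking are all simplifying assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def compositional_attention(x, params, num_searches=4, num_retrievals=2):
    """Toy compositional attention for one unbatched, unmasked sequence.

    Each of the S searches computes its own query/key attention pattern;
    each of the R retrievals computes its own value projection. A soft
    selection then decides, per search and per token, which retrieval's
    output to use -- the disentanglement the paper proposes.
    x: (seq_len, dim)
    """
    Wq, Wk, Wv, Wsel, Wkey = params   # Wsel/Wkey: assumed selection weights
    d = Wq.shape[-1]
    outputs = []
    for s in range(num_searches):
        Q = x @ Wq[s]                                  # (T, d)
        K = x @ Wk[s]                                  # (T, d)
        A = softmax(Q @ K.T / np.sqrt(d))              # (T, T): search s
        # Apply search s with every retrieval's values
        retrieved = np.stack(
            [A @ (x @ Wv[r]) for r in range(num_retrievals)]
        )                                              # (R, T, d)
        # Soft selection over the R retrievals, per token
        sel_q = x @ Wsel[s]                            # (T, d)
        keys = np.einsum("rtd,de->rte", retrieved, Wkey)
        scores = np.einsum("td,rtd->tr", sel_q, keys) / np.sqrt(d)
        w = softmax(scores, axis=-1)                   # (T, R)
        outputs.append(np.einsum("tr,rtd->td", w, retrieved))
    return np.concatenate(outputs, axis=-1)           # (T, S * d)

# Demo with random weights
T, D, d, S, R = 6, 16, 8, 4, 2
params = (
    0.1 * rng.normal(size=(S, D, d)),  # Wq
    0.1 * rng.normal(size=(S, D, d)),  # Wk
    0.1 * rng.normal(size=(R, D, d)),  # Wv
    0.1 * rng.normal(size=(S, D, d)),  # Wsel
    0.1 * rng.normal(size=(d, d)),     # Wkey
)
x = rng.normal(size=(T, D))
out = compositional_attention(x, params, num_searches=S, num_retrievals=R)
print(out.shape)  # (6, 32): S search outputs of width d, concatenated
```

In standard multi-head attention there is exactly one value projection per attention pattern; here any of the R value projections can be paired with any of the S patterns, selected softly per token.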
No commits in the last 6 months. Available on PyPI.
Use this if you are a machine learning engineer or researcher looking for an improved attention mechanism to enhance the performance of your deep learning models.
Not ideal if you are not working directly with neural network architectures, or if your project is not built on TensorFlow.
Stars
14
Forks
—
Language
Python
License
Apache-2.0
Category
Last pushed
Jun 01, 2022
Commits (30d)
0
Dependencies
2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Rishit-dagli/Compositional-Attention"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
philipperemy/keras-attention
Keras Attention Layer (Luong and Bahdanau scores).
tatp22/linformer-pytorch
My take on a practical implementation of Linformer for Pytorch.
datalogue/keras-attention
Visualizing RNNs using the attention mechanism
ematvey/hierarchical-attention-networks
Document classification with Hierarchical Attention Networks in TensorFlow. WARNING: project is...
thushv89/attention_keras
Keras Layer implementation of Attention for Sequential models