Rishit-dagli/Compositional-Attention

An implementation of the paper "Compositional Attention: Disentangling Search and Retrieval" from Mila

30 / 100 (Emerging)

This is a tool for machine learning researchers and practitioners working with neural networks, especially those using TensorFlow. It provides an alternative to standard Multi-head Attention layers, which process input data like text or images to understand relationships within it. You provide your model's hidden states (tokens) and a mask, and it outputs refined representations that can improve your model's performance.
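For orientation, here is a minimal TensorFlow usage sketch of the pattern described above. The import path, the `CompositionalAttention` class name, and the constructor and call arguments are assumptions made for illustration, not the package's confirmed API; check the repository README for the exact interface.

    # Minimal sketch: using a Compositional Attention layer in place of
    # standard Multi-head Attention on a batch of token embeddings.
    # NOTE: the import path, class name, and argument names below are
    # assumed for illustration and may differ from the package's real API.
    import tensorflow as tf
    from compositional_attention import CompositionalAttention  # assumed import

    layer = CompositionalAttention(dim=512)  # assumed constructor signature

    tokens = tf.random.normal([2, 128, 512])    # (batch, sequence, hidden)
    mask = tf.ones([2, 128], dtype=tf.bool)     # attend to every position

    refined = layer(tokens, mask=mask)          # assumed call signature
    print(refined.shape)                        # expected: (2, 128, 512)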

No commits in the last 6 months. Available on PyPI.

Use this if you are a machine learning engineer or researcher looking for an improved attention mechanism to enhance the performance of your deep learning models.

Not ideal if you are not working directly with neural network architectures or if your project is not built with TensorFlow.

deep-learning neural-networks natural-language-processing computer-vision model-optimization
Status: Stale (no commits in 6 months)
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 25 / 25
Community: 0 / 25


Stars: 14
Forks:
Language: Python
License: Apache-2.0
Last pushed: Jun 01, 2022
Commits (30d): 0
Dependencies: 2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Rishit-dagli/Compositional-Attention"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
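The same endpoint can also be queried from Python. The sketch below assumes only the URL shown in the curl command above and uses the standard requests library.

    # Fetch the same quality data from Python instead of curl.
    import requests

    url = ("https://pt-edge.onrender.com/api/v1/quality/"
           "ml-frameworks/Rishit-dagli/Compositional-Attention")
    response = requests.get(url)   # no API key needed within the free daily limit
    response.raise_for_status()
    print(response.json())         # quality scores and repository metadata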