homerjed/set_transformer
A JAX implementation of the Set Transformer from the paper 'Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks' by Lee et al. (2019).
This is a neural network model for machine learning researchers and practitioners working with unordered collections of data. It takes a set of data points, such as a group of images or sensor readings where order doesn't matter, and produces an output that is independent of the input order. It is particularly useful for tasks where the relationships within a collection matter but their sequence does not.
No commits in the last 6 months.
Use this if you are a machine learning researcher or practitioner needing to process sets of data where the order of elements should not influence the outcome, such as point clouds or bags of features, and you are working within the JAX framework.
Not ideal if your data has an inherent sequential order (like time-series data or text) that needs to be preserved and leveraged, or if you are not using JAX.
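The permutation invariance described above comes from attention-based pooling: a learned seed vector attends over the set elements, so shuffling the elements leaves the output unchanged. A minimal single-head sketch in JAX (the parameter names and random initialization here are illustrative assumptions, not the repository's actual API):

```python
import jax
import jax.numpy as jnp

def attention(q, k, v):
    # Scaled dot-product attention.
    scores = q @ k.T / jnp.sqrt(q.shape[-1])
    return jax.nn.softmax(scores, axis=-1) @ v

def pma_pool(params, x):
    # Pooling by Multihead Attention (PMA), single-head sketch:
    # a learned seed attends over the set elements, so the result
    # does not depend on element order.
    seed, w_k, w_v = params
    return attention(seed, x @ w_k, x @ w_v)

key = jax.random.PRNGKey(0)
d = 8
k1, k2, k3, k4 = jax.random.split(key, 4)
params = (
    jax.random.normal(k1, (1, d)),  # learned seed (hypothetical init)
    jax.random.normal(k2, (d, d)),  # key projection
    jax.random.normal(k3, (d, d)),  # value projection
)
x = jax.random.normal(k4, (5, d))        # a set of 5 elements
x_perm = x[jnp.array([3, 0, 4, 1, 2])]   # same set, shuffled

out = pma_pool(params, x)
out_perm = pma_pool(params, x_perm)
# Permuting the set permutes keys and values consistently,
# so the attention-weighted sum is identical.
assert jnp.allclose(out, out_perm, atol=1e-5)
```

Permuting the input permutes the attention weights and values in lockstep, so the weighted sum (and thus the pooled output) is unchanged.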
Stars: 4
Forks: —
Language: Jupyter Notebook
License: —
Category:
Last pushed: Jul 24, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/homerjed/set_transformer"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
philipperemy/keras-attention
Keras Attention Layer (Luong and Bahdanau scores).
thushv89/attention_keras
Keras Layer implementation of Attention for Sequential models
datalogue/keras-attention
Visualizing RNNs using the attention mechanism
ematvey/hierarchical-attention-networks
Document classification with Hierarchical Attention Networks in TensorFlow. WARNING: project is...
tatp22/linformer-pytorch
My take on a practical implementation of Linformer for Pytorch.