tk-rusch/LEM
Official code for Long Expressive Memory (ICLR 2022, Spotlight)
LEM (Long Expressive Memory) is a recurrent neural network architecture, derived from a system of multiscale ordinary differential equations, designed to learn very long-term dependencies in sequential data. It can be used as a drop-in replacement for standard recurrent cells such as LSTMs and GRUs, and the repository provides the official code and experiments for sequence-modeling tasks on data such as time series, text, and image sequences. It is aimed at AI/ML engineers and researchers who build and train models on sequential data.
No commits in the last 6 months.
Use this if you are building deep learning models that need to process very long sequences of data and require better performance than standard recurrent neural networks.
Not ideal if you are not working with sequence data or if your existing models already meet performance and speed requirements with standard methods like LSTMs or GRUs.
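To make the model concrete, here is a minimal sketch of the LEM update equations from the paper (Rusch et al., ICLR 2022), written for a single scalar unit in plain Python. The parameter names and toy values are illustrative only; the repository's actual implementation is a vectorized deep-learning cell, not this pure-Python version.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lem_step(u, y, z, p, dt=1.0):
    """One LEM step for a single scalar unit.

    LEM update equations (scalar form):
        dt1 = dt * sigmoid(w1*y + v1*u + b1)   # learned multiscale time step
        dt2 = dt * sigmoid(w2*y + v2*u + b2)   # second learned time step
        z'  = (1 - dt1)*z + dt1*tanh(wz*y + vz*u + bz)
        y'  = (1 - dt2)*y + dt2*tanh(wy*z' + vy*u + by)
    Each state is a convex combination of its previous value and a bounded
    tanh update, which is what keeps gradients stable over long sequences.
    """
    dt1 = dt * sigmoid(p["w1"] * y + p["v1"] * u + p["b1"])
    dt2 = dt * sigmoid(p["w2"] * y + p["v2"] * u + p["b2"])
    z_new = (1.0 - dt1) * z + dt1 * math.tanh(p["wz"] * y + p["vz"] * u + p["bz"])
    y_new = (1.0 - dt2) * y + dt2 * math.tanh(p["wy"] * z_new + p["vy"] * u + p["by"])
    return y_new, z_new

# Toy parameters (illustrative values, not trained weights).
params = {k: 0.5 for k in
          ["w1", "v1", "b1", "w2", "v2", "b2",
           "wz", "vz", "bz", "wy", "vy", "by"]}

y, z = 0.0, 0.0
for u in [1.0, -0.5, 0.25]:       # a tiny input sequence
    y, z = lem_step(u, y, z, params)
```

Because each update interpolates between the previous state and a tanh term, both hidden states remain bounded in (-1, 1) regardless of sequence length, which is the intuition behind LEM's stability on long sequences.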
Stars
71
Forks
11
Language
Python
License
—
Category
—
Last pushed
Mar 11, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tk-rusch/LEM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
limix-ldm-ai/LimiX
LimiX: Unleashing Structured-Data Modeling Capability for Generalist Intelligence...
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
google-research/plur
PLUR (Programming-Language Understanding and Repair) is a collection of source code datasets...
YalaLab/pillar-finetune
Finetuning framework for Pillar medical imaging models.
thuml/LogME
Code release for "LogME: Practical Assessment of Pre-trained Models for Transfer Learning" (ICML...