camail-official/compressm

This is the official repository for the ICLR 2026 paper "The Curious Case of In-Training Compression of State Space Models".

Overall score: 30 / 100 (Emerging)

This project helps machine learning researchers and practitioners compress State Space Models (SSMs) for long-sequence data. Given training data for benchmark tasks such as image classification (sMNIST, sCIFAR), text classification (IMDB, AAN), or sequence reasoning (ListOps, Pathfinder), it produces a smaller, computationally cheaper model that retains high performance. Its intended users are researchers and engineers working on deep learning models for sequential data.
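
For intuition only, the sketch below shows one generic way a linear state space layer can be shrunk mid-training: score each state dimension and keep only the strongest ones. It assumes PyTorch; the DiagonalSSM class, the compress method, and the Hankel-style importance heuristic are all illustrative assumptions made for this page, not the repository's code or the paper's algorithm.

# Illustrative sketch only -- NOT this repository's API or the paper's method.
# A toy diagonal SSM whose state dimension can be pruned during training.
import torch
import torch.nn as nn

class DiagonalSSM(nn.Module):
    # Discrete linear SSM: x_t = A * x_{t-1} + B u_t, y_t = C x_t, diagonal A.
    def __init__(self, d_input: int, d_state: int):
        super().__init__()
        self.A = nn.Parameter(torch.rand(d_state) * 0.9)           # stable diagonal transition
        self.B = nn.Parameter(torch.randn(d_state, d_input) * 0.1)  # input map
        self.C = nn.Parameter(torch.randn(d_input, d_state) * 0.1)  # output map

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, time, d_input) -> y: (batch, time, d_input)
        x = u.new_zeros(u.shape[0], self.A.shape[0])
        ys = []
        for t in range(u.shape[1]):
            x = self.A * x + u[:, t] @ self.B.T
            ys.append(x @ self.C.T)
        return torch.stack(ys, dim=1)

    @torch.no_grad()
    def compress(self, keep: int) -> None:
        # Crude per-dimension importance proxy (assumed heuristic):
        # input/output coupling scaled by how slowly the mode decays.
        score = self.B.abs().sum(1) * self.C.abs().sum(0) / (1.0 - self.A.abs() + 1e-6)
        idx = score.topk(keep).indices
        self.A = nn.Parameter(self.A[idx].clone())
        self.B = nn.Parameter(self.B[idx].clone())
        self.C = nn.Parameter(self.C[:, idx].clone())

# Example: prune a 64-dim state down to 32 partway through training.
layer = DiagonalSSM(d_input=8, d_state=64)
y = layer(torch.randn(2, 100, 8))
layer.compress(keep=32)

Note that after compress() the layer's parameters are new tensors, so any optimizer tracking the old ones would need its state rebuilt for the smaller model.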

Use this if you are training State Space Models and need to reduce their computational cost and memory footprint during training without a significant loss in performance.

Not ideal if you are working with other types of deep learning architectures (e.g., Transformers, CNNs) or if you are not comfortable with experimental machine learning research.

long-sequence-modeling state-space-models model-compression deep-learning-optimization neural-network-efficiency
Package: none
Dependents: none
Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 15 / 25
Community: 0 / 25


Stars: 12
Forks:
Language: Python
License: MIT
Last pushed: Feb 23, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/camail-official/compressm"

Open to everyone: 100 requests/day with no API key; a free key raises the limit to 1,000 requests/day.
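
For programmatic use from Python rather than curl, here is a minimal client sketch using the requests library (the endpoint is the one shown above; the response is assumed to be JSON, and its exact field names are not documented here):

import requests  # assumes the requests package is installed

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/camail-official/compressm")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors or rate limiting
print(resp.json())       # assumed JSON body carrying the score fields above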