Y-Research-SBU/CSRv2
Official Repository for CSRv2 - ICLR 2026
This tool helps researchers and developers working with large language models to create highly efficient, "ultra-sparse" embeddings. It takes raw text or image data and outputs significantly smaller, specialized numerical representations that maintain accuracy while reducing computational and storage costs. This is designed for machine learning engineers and AI researchers optimizing large-scale AI applications.
Use this if you need to drastically reduce the size and computational overhead of your text or image embeddings without sacrificing performance for tasks like search, recommendation, or classification.
Not ideal if you are a business user or data analyst looking for a no-code solution, as this requires deep technical knowledge to set up and run.
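The core idea behind "ultra-sparse" embeddings can be sketched with a minimal top-k sparsification example. This is a generic illustration of the concept, not CSRv2's actual API; the embedding dimension and sparsity level below are placeholders:

```python
import numpy as np

def topk_sparsify(dense: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k largest-magnitude coordinates; zero the rest.

    Generic sketch of an ultra-sparse embedding: per-vector storage
    and dot-product cost drop from O(d) to O(k).
    """
    sparse = np.zeros_like(dense)
    idx = np.argsort(np.abs(dense))[-k:]  # indices of the k largest entries
    sparse[idx] = dense[idx]
    return sparse

rng = np.random.default_rng(0)
embedding = rng.standard_normal(4096)    # placeholder dense embedding
sparse = topk_sparsify(embedding, k=32)  # 32 of 4096 dims survive
print(np.count_nonzero(sparse))          # 32
```

In practice, systems like CSRv2 learn which coordinates to keep rather than thresholding post hoc, but the storage and compute savings come from the same k-of-d structure.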
Stars: 10
Forks: —
Language: Python
License: MIT
Category:
Last pushed: Feb 28, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Y-Research-SBU/CSRv2"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
zhuhanqing/APOLLO
APOLLO: SGD-like Memory, AdamW-level Performance; MLSys'25 Outstanding Paper Honorable Mention
zhenye234/xcodec
AAAI 2025: Codec Does Matter: Exploring the Semantic Shortcoming of Codec for Audio Language Model
HITESHLPATEL/Mamba-Papers
Awesome Mamba Papers: A Curated Collection of Research Papers, Tutorials & Blogs
psychofict/llm-effective-context-length
Investigating Why the Effective Context Length of LLMs Falls Short (Based on STRING, ICLR 2025)
rishikksh20/mamba3-pytorch
Readable implementation of Mamba 3 SSM model