UIC-InDeXLab/RSR
An Efficient Matrix Multiplication Algorithm for Accelerating Inference in Binary and Ternary Neural Networks
This project helps deep learning practitioners accelerate inference in neural networks, especially those using binary or ternary weights. It takes your pre-trained low-bit models and makes them run faster by optimizing the core matrix multiplication operations. Researchers and engineers deploying efficient AI models, particularly in resource-constrained environments, will find it useful.
Use this if you need to significantly speed up the inference time of your binary or ternary neural networks.
Not ideal if your neural networks use only full- or half-precision (32-bit or 16-bit) weights, as the optimizations are specific to low-bit networks.
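To see why low-bit weights enable this kind of speedup, here is a minimal sketch (not the repo's actual RSR algorithm, just the general idea): when weights are restricted to {-1, 0, +1}, a matrix-vector product reduces to additions and subtractions of inputs, with no multiplications at all.

```python
import numpy as np

def ternary_matvec(W, x):
    """Multiply a ternary weight matrix W (entries in {-1, 0, +1})
    by a vector x using only additions and subtractions.

    Illustrative sketch only -- not the RSR algorithm from this repo.
    """
    pos = (W == 1)   # mask of +1 weights
    neg = (W == -1)  # mask of -1 weights
    # Each output is (sum of inputs under +1 mask) - (sum under -1 mask).
    return (pos * x).sum(axis=1) - (neg * x).sum(axis=1)

W = np.array([[1, 0, -1],
              [0, 1,  1]])
x = np.array([2.0, 3.0, 5.0])
print(ternary_matvec(W, x))  # same result as W @ x
```

Real implementations go further (bit-packing, shared-subexpression reuse across rows), but the core observation is the same: the multiply in multiply-accumulate disappears.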
Stars: 17
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Mar 27, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/UIC-InDeXLab/RSR"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
quic/efficient-transformers
This library empowers users to seamlessly port pretrained models and checkpoints on the...
ManuelSLemos/RabbitLLM
Run 70B+ LLMs on a single 4GB GPU — no quantization required.
alpa-projects/alpa
Training and serving large-scale neural networks with auto parallelization.
arm-education/Advanced-AI-Hardware-Software-Co-Design
Hands-on course materials for ML engineers to master extreme model quantization and on-device...
IST-DASLab/marlin
FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batchsizes...