UIC-InDeXLab/RSR

An Efficient Matrix Multiplication Algorithm for Accelerating Inference in Binary and Ternary Neural Networks

Overall score: 35 / 100 (Emerging)

This project helps deep learning practitioners accelerate the inference speed of neural networks, especially those using binary or ternary weights. It takes your pre-trained low-bit neural network models and makes them run faster by optimizing the core matrix multiplication operations. Researchers and engineers working on deploying efficient AI models, particularly for resource-constrained environments, will find this useful.

Use this if you need to significantly speed up the inference time of your binary or ternary neural networks.

Not ideal if your neural networks exclusively use full-precision (32-bit or 16-bit) weights, as the optimizations are specific to low-bit networks.
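Because binary and ternary networks constrain weights to {-1, +1} or {-1, 0, +1}, every dot product in a matrix multiplication collapses into additions and subtractions with no multiplications at all. A minimal sketch of that idea for a ternary matrix-vector product (an illustration only, not the repository's RSR algorithm):

```python
def ternary_matvec(W, x):
    """Matrix-vector product where W has entries in {-1, 0, 1}.

    Each output element is computed by adding the activations selected
    by +1 weights and subtracting those selected by -1 weights,
    avoiding multiplications entirely.
    """
    out = []
    for row in W:
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi      # +1 weight: add the activation
            elif w == -1:
                acc -= xi      # -1 weight: subtract it
            # w == 0 contributes nothing
        out.append(acc)
    return out

W = [[1, 0, -1], [-1, 1, 1]]   # ternary weight matrix
x = [0.5, 2.0, -1.0]           # input activations
print(ternary_matvec(W, x))    # [1.5, 0.5]
```

Optimized implementations like this project's go further by reorganizing the computation to reuse shared partial sums, but the multiplication-free structure above is what makes low-bit networks amenable to such speedups in the first place.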

deep-learning-inference neural-network-optimization low-bit-ai edge-ai model-deployment
No package published · No dependents
Maintenance: 13 / 25
Adoption: 6 / 25
Maturity: 16 / 25
Community: 0 / 25


Stars: 17
Forks:
Language: Python
License: MIT
Last pushed: Mar 27, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/UIC-InDeXLab/RSR"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
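The same endpoint can be queried from Python using only the standard library. The URL layout below is taken directly from the curl example; the shape of the returned JSON is not documented here, so inspect the payload rather than relying on any assumed field names:

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, repo: str) -> str:
    # Mirrors the path structure of the curl example above.
    return f"{API_BASE}/{registry}/{repo}"

def fetch_quality(registry: str, repo: str) -> dict:
    # No API key is needed for up to 100 requests/day.
    with urlopen(quality_url(registry, repo), timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Prints the raw quality payload for this repository.
    print(fetch_quality("transformers", "UIC-InDeXLab/RSR"))
```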