RedisAI/redis-inference-optimization

A Redis module for serving tensors and executing deep learning graphs

Score: 48 / 100 (Emerging)

This project helps MLOps engineers and backend developers serve machine learning models efficiently in production. It loads trained deep learning models (from TensorFlow, PyTorch, or ONNX) and serves low-latency predictions directly from a Redis database. Its primary users are engineers responsible for deploying and scaling AI/ML applications.
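
Under the hood, serving works through the module's Redis commands: store a model blob once, write input tensors, execute the graph, and read the output tensor back. A minimal redis-cli sketch (the key names mymodel, in:1, and out:1 and the TensorFlow node names a and b are hypothetical placeholders; model.pb stands in for a real frozen graph):

# Store a trained TensorFlow graph under a Redis key (blob read from stdin via -x)
redis-cli -x AI.MODELSTORE mymodel TF CPU INPUTS 1 a OUTPUTS 1 b BLOB < model.pb
# Write a 1x2 float input tensor
redis-cli AI.TENSORSET in:1 FLOAT 1 2 VALUES 2.0 3.0
# Execute the model, mapping Redis keys to graph inputs/outputs
redis-cli AI.MODELEXECUTE mymodel INPUTS 1 in:1 OUTPUTS 1 out:1
# Read the prediction values back
redis-cli AI.TENSORGET out:1 VALUES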

840 stars. No commits in the last 6 months.

Use this if you need to serve machine learning models with high throughput and low latency, especially in an environment already using Redis.

Not ideal if you are looking for an actively maintained or supported solution, as this project is no longer updated.

MLOps · Model Serving · AI/ML Deployment · Backend Development · Real-time Inference
Stale (6m) · No Package · No Dependents
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 20 / 25


Stars: 840
Forks: 107
Language: C
License: (not listed)
Last pushed: Aug 20, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/RedisAI/redis-inference-optimization"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
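
The response format isn't documented here; assuming the endpoint returns JSON (a reasonable guess for a REST API, not a confirmed schema), the payload can be pretty-printed for inspection:

curl -s "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/RedisAI/redis-inference-optimization" | jq '.'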