RedisAI/redis-inference-optimization
A Redis module for serving tensors and executing deep learning graphs
This project helps MLOps engineers and backend developers serve machine learning models efficiently in production. It loads trained deep learning models (in TensorFlow, PyTorch, or ONNX formats) and delivers fast, low-latency predictions by storing tensors and executing models directly inside a Redis database. Its primary users are teams responsible for deploying and scaling AI/ML applications.
840 stars. No commits in the last 6 months.
Use this if you need to serve machine learning models with high throughput and low latency, especially in an environment already using Redis.
Not ideal if you are looking for an actively maintained or supported solution, as this project is no longer updated.
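To make the workflow above concrete, here is a minimal sketch that stores a model and runs one prediction through RedisAI's commands (AI.MODELSTORE, AI.TENSORSET, AI.MODELEXECUTE, AI.TENSORGET) via redis-py; the key names, model file, graph node names, and tensor shape are illustrative assumptions, not part of this repository.

import redis

# Connect to a Redis instance that has the RedisAI module loaded.
r = redis.Redis(host="localhost", port=6379)

# Store a trained model; "model.pb" and the node names are illustrative.
with open("model.pb", "rb") as f:
    r.execute_command(
        "AI.MODELSTORE", "mymodel", "TF", "CPU",
        "INPUTS", 1, "in", "OUTPUTS", 1, "out",
        "BLOB", f.read(),
    )

# Write an input tensor, execute the model, and read the prediction back.
r.execute_command("AI.TENSORSET", "in_t", "FLOAT", 1, 4,
                  "VALUES", 0.1, 0.2, 0.3, 0.4)
r.execute_command("AI.MODELEXECUTE", "mymodel",
                  "INPUTS", 1, "in_t", "OUTPUTS", 1, "out_t")
print(r.execute_command("AI.TENSORGET", "out_t", "VALUES"))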
Stars
840
Forks
107
Language
C
License
—
Category
ml-frameworks
Last pushed
Aug 20, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/RedisAI/redis-inference-optimization"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
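For scripted access, the same endpoint can be fetched from Python using only the standard library. This is a sketch under the assumption that the endpoint returns a JSON body; its exact schema is not documented here.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/RedisAI/redis-inference-optimization")

# Anonymous access allows 100 requests/day; a free key raises this to 1,000/day.
with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)  # assumes a JSON response body

print(data)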
Higher-rated alternatives
tensorflow/tensorflow
An Open Source Machine Learning Framework for Everyone
microsoft/tensorwatch
Debugging, monitoring and visualization for Python Machine Learning and Data Science
KomputeProject/kompute
General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics...
hailo-ai/hailort-drivers
The Hailo PCIe driver is required for interacting with a Hailo device over the PCIe interface
NVIDIA/nvshmem
NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM...