RidgeRun/r2inference
RidgeRun Inference Framework
R² Inference helps embedded system developers create and deploy machine learning models on Google Coral devices. It takes your pre-trained models and optimizes them for efficient execution on Coral hardware, delivering fast, localized AI capabilities. This is for engineers building smart devices, robotics, or industrial automation solutions that need on-device intelligence.
No commits in the last 6 months.
Use this if you are an embedded systems developer working with Google Coral hardware and need a streamlined way to integrate and run your machine learning models directly on the device.
Not ideal if you are a data scientist primarily focused on model training or if you need to deploy models to cloud-based or general-purpose GPU infrastructure.
Stars: 27
Forks: 10
Language: C++
License: —
Category: —
Last pushed: Aug 10, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/RidgeRun/r2inference"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
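As a sketch, the endpoint above can also be queried from Python with the standard library. The `X-Api-Key` header name is an assumption for illustration; the service's actual authentication scheme is not documented on this page, so check the API docs before relying on it.

```python
import json
import urllib.request
from typing import Optional

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_request(category: str, repo: str,
                  api_key: Optional[str] = None) -> urllib.request.Request:
    """Build a GET request for a repo's quality data.

    The X-Api-Key header name is a guess; anonymous requests
    (no key) are limited to 100/day per the page above.
    """
    url = f"{API_BASE}/{category}/{repo}"
    headers = {"X-Api-Key": api_key} if api_key else {}
    return urllib.request.Request(url, headers=headers)

def fetch_quality(category: str, repo: str,
                  api_key: Optional[str] = None) -> dict:
    """Send the request and decode the JSON payload."""
    with urllib.request.urlopen(build_request(category, repo, api_key)) as resp:
        return json.load(resp)

# Build the same request as the curl command for this repo.
req = build_request("ml-frameworks", "RidgeRun/r2inference")
print(req.full_url)
```

Calling `fetch_quality("ml-frameworks", "RidgeRun/r2inference")` would perform the same request as the curl command shown above.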
Higher-rated alternatives
microsoft/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
onnx/onnx
Open standard for machine learning interoperability
PINTO0309/onnx2tf
Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The...
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This...
onnx/onnxmltools
ONNXMLTools enables conversion of models to ONNX