RidgeRun/r2inference

RidgeRun Inference Framework

Score: 40 / 100 (Emerging)

R² Inference helps embedded system developers create and deploy machine learning models on Google Coral devices. It takes your pre-trained models and optimizes them for efficient execution on Coral hardware, delivering fast, localized AI capabilities. This is for engineers building smart devices, robotics, or industrial automation solutions that need on-device intelligence.

No commits in the last 6 months.

Use this if you are an embedded systems developer working with Google Coral hardware and need a streamlined way to integrate and run your machine learning models directly on the device.

Not ideal if you are a data scientist primarily focused on model training or if you need to deploy models to cloud-based or general-purpose GPU infrastructure.

embedded-development edge-ai machine-learning-deployment robotics-engineering industrial-automation
Badges: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 27
Forks: 10
Language: C++
License:
Last pushed: Aug 10, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/RidgeRun/r2inference"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
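The endpoint above appears to follow a fixed path layout. As a minimal sketch, the URL for any listed repository can be assembled from its coordinates; note that the `category`, `owner`, and `repo` path segments are assumptions inferred from the single example URL shown, not from published API documentation:

```python
# Assumption: the path layout /api/v1/quality/<category>/<owner>/<repo>
# is inferred from the example curl command above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

# Reproduces the example URL from the page:
print(quality_url("ml-frameworks", "RidgeRun", "r2inference"))
# -> https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/RidgeRun/r2inference
```

Fetching the URL (with `curl` as shown, or any HTTP client) is then a separate step, subject to the 100-requests/day limit noted above.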