hailo-ai/hailort
An open-source, lightweight, high-performance inference framework for Hailo devices
HailoRT is a runtime library that helps developers integrate and run deep learning models on Hailo AI accelerator hardware: it takes trained AI models and executes them efficiently on Hailo devices to produce inference results. Its primary users are embedded systems engineers and AI solution developers building products around Hailo's specialized AI chips.
Use this if you are developing an embedded application that needs to perform high-performance AI inference directly on Hailo-10 or Hailo-15 AI accelerator devices.
Not ideal if you are looking for a general-purpose AI development framework that runs on standard CPUs or GPUs, or if your hardware is not a Hailo AI accelerator.
Stars
172
Forks
68
Language
C++
License
—
Category
Last pushed
Feb 03, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/hailo-ai/hailort"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
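The endpoint returns JSON, so the curl output is easy to consume from a script. A minimal sketch of parsing such a response in Python follows; note that the actual response schema is not documented on this page, so every field name below (`repo`, `stars`, `forks`, and so on) is an assumption for illustration only:

```python
import json

# Hypothetical response body. The real schema returned by the
# pt-edge.onrender.com quality API is not documented here, so all
# field names are illustrative assumptions based on the stats shown
# on this page (stars, forks, language, last pushed, commits in 30d).
sample = """
{
  "repo": "hailo-ai/hailort",
  "stars": 172,
  "forks": 68,
  "language": "C++",
  "last_pushed": "2026-02-03",
  "commits_30d": 0
}
"""

data = json.loads(sample)

# Summarize the fields we care about.
print(f"{data['repo']}: {data['stars']} stars, {data['forks']} forks, {data['language']}")
```

In practice you would replace the `sample` string with the body fetched by the curl command above (for example, `curl ... | python parse.py` reading from stdin), or use a one-liner with `jq` if you only need a single field.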
Related frameworks
microsoft/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
onnx/onnx
Open standard for machine learning interoperability
PINTO0309/onnx2tf
Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The...
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This...
onnx/onnxmltools
ONNXMLTools enables conversion of models to ONNX