iree-org/iree

A retargetable MLIR-based machine learning compiler and runtime toolkit.

Quality score: 73 / 100 · Verified

IREE helps machine learning engineers optimize their models to run efficiently across diverse hardware, from powerful data centers to smaller mobile and edge devices. It compiles an existing machine learning model into a highly optimized form that executes efficiently on the target hardware. This tool is for ML infrastructure engineers and embedded systems developers who need to deploy performant ML models.

3,655 stars. Actively maintained with 197 commits in the last 30 days.

Use this if you need to deploy machine learning models onto various hardware platforms, ensuring optimal performance and efficiency, especially in constrained environments like mobile or edge devices.

Not ideal if you are a data scientist primarily focused on model training or experimentation and not concerned with the low-level deployment optimization on diverse hardware.

Tags: ML-deployment · edge-AI · model-optimization · embedded-systems-ML · cross-platform-ML
No package published · No dependents
Maintenance: 22 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 25 / 25


Stars: 3,655
Forks: 862
Language: C++
License: Apache-2.0
Last pushed: Mar 13, 2026
Commits (30d): 197

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/iree-org/iree"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
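The same data can be fetched from a script rather than curl. A minimal sketch in Python using only the standard library, assuming the URL shape shown in the curl example above (collection/owner/repo); the response schema is not documented here, so the fetch is left as an opt-in step:

```python
# Build and (optionally) fetch the quality-card endpoint shown above.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection: str, owner: str, repo: str) -> str:
    """Construct the endpoint URL for one repository's quality data."""
    return f"{BASE}/{collection}/{owner}/{repo}"

url = quality_url("ml-frameworks", "iree-org", "iree")
print(url)

# Uncomment to actually fetch the data. Each call counts against the
# anonymous quota of 100 requests/day (1,000/day with a free key).
# with urllib.request.urlopen(url) as resp:
#     data = json.loads(resp.read())
#     print(data)
```

Keeping the URL construction separate from the network call makes it easy to batch requests for other repositories in the same collection while staying within the daily quota.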