iree-org/iree
A retargetable MLIR-based machine learning compiler and runtime toolkit.
IREE helps machine learning engineers run their models efficiently across diverse hardware, from data centers down to mobile and edge devices. It compiles an existing model ahead of time into an optimized format that executes quickly on the target hardware. It is aimed at ML infrastructure engineers and embedded systems developers who need to deploy performant models.
3,655 stars. Actively maintained with 197 commits in the last 30 days.
Use this if you need to deploy machine learning models across varied hardware platforms with strong performance, especially in constrained environments like mobile or edge devices.
Not ideal if you are a data scientist focused on model training or experimentation rather than low-level deployment optimization across diverse hardware.
Stars: 3,655
Forks: 862
Language: C++
License: Apache-2.0
Category:
Last pushed: Mar 13, 2026
Commits (30d): 197
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/iree-org/iree"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
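The endpoint above follows a simple path pattern: category, then owner, then repo. A minimal sketch of building that URL in Python (the `quality_api_url` helper and its `base` default are illustrative, not part of any documented client for this service):

```python
def quality_api_url(category: str, owner: str, repo: str,
                    base: str = "https://pt-edge.onrender.com/api/v1/quality") -> str:
    """Build the quality-data URL for a repository in a given category."""
    return f"{base}/{category}/{owner}/{repo}"

# Example: the IREE endpoint shown above
url = quality_api_url("ml-frameworks", "iree-org", "iree")
# → "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/iree-org/iree"
```

The resulting URL can be fetched with `curl` or any HTTP client; per the note above, unauthenticated callers get 100 requests/day.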
Related frameworks
brucefan1983/GPUMD
Graphics Processing Units Molecular Dynamics
uxlfoundation/oneDAL
oneAPI Data Analytics Library (oneDAL)
rapidsai/cuml
cuML - RAPIDS Machine Learning Library
NVIDIA/cutlass
CUDA Templates and Python DSLs for High-Performance Linear Algebra
ROCm/Tensile
[DEPRECATED] Moved to ROCm/rocm-libraries repo