sdpython/onnx-extended

New operators for the ReferenceEvaluator, new kernels for onnxruntime, CPU, CUDA

Score: 57 / 100 (Established)

This project helps machine learning engineers and data scientists accelerate their ONNX models by replacing standard operator implementations with highly optimized C++ kernels. You provide an ONNX model and get back the same model running significantly faster, especially on small graphs and tensors. It is aimed at anyone deploying ONNX models who needs maximum inference speed.

Available on PyPI.

Use this if you need to speed up inference for your ONNX models by leveraging faster, custom C++ implementations or new, extended operators.

Not ideal if you are developing or training models in frameworks like PyTorch or TensorFlow and don't yet have an ONNX model.

Machine-Learning-Deployment Model-Optimization AI-Acceleration Inference-Speedup ONNX-Runtime
Maintenance 10 / 25
Adoption 7 / 25
Maturity 25 / 25
Community 15 / 25


Stars: 35
Forks: 6
Language: Python
License: MIT
Last pushed: Feb 13, 2026
Commits (30d): 0
Dependencies: 3

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/sdpython/onnx-extended"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
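For programmatic use, the endpoint URL can be assembled from its parts. This is a minimal sketch assuming the path pattern in the curl example above (`/api/v1/quality/{category}/{owner}/{repo}`, where `ml-frameworks` is this project's category) generalizes to other repositories:

```python
# Base of the quality API, taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Return the quality-report endpoint for a repository
    (path pattern inferred from the documented curl example)."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "sdpython", "onnx-extended")
print(url)
```

Pass the resulting URL to any HTTP client; no API key is needed within the 100 requests/day limit.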