athrva98/polyinfer

Unified deployment pipeline

33 / 100 (Emerging)

This tool lets machine learning engineers and researchers quickly deploy and run trained models on a range of hardware, from NVIDIA GPUs to Intel CPUs. It takes a trained model file (such as an ONNX file), runs it on the fastest available backend without complex setup, and returns the model's predictions. It is aimed at anyone who needs the best performance from their AI models in real-world applications, regardless of the underlying hardware.
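The core idea described above, preferring the fastest backend and falling back gracefully to a portable one, can be sketched independently of polyinfer's actual API, which is not documented on this page. The backend names and the `pick_backend` helper below are illustrative assumptions, not polyinfer's real interface.

```python
# Hypothetical sketch of fastest-backend selection; polyinfer's real API may differ.
# Backend names and the preference order are illustrative assumptions.

# Preference order: fastest/most specialized first, portable CPU fallback last.
BACKEND_PREFERENCE = ["tensorrt", "cuda", "openvino", "cpu"]

def pick_backend(available):
    """Return the first preferred backend that is actually available."""
    for name in BACKEND_PREFERENCE:
        if name in available:
            return name
    raise RuntimeError("no usable inference backend found")

# Example: on a machine with only OpenVINO and a plain CPU runtime,
# the selector skips the GPU backends and lands on OpenVINO.
print(pick_backend({"openvino", "cpu"}))  # openvino
```

The same pattern appears in ONNX Runtime's execution-provider lists, where an ordered preference is resolved against what the host actually supports.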

Available on PyPI.

Use this if you need to run your AI models as fast as possible on different types of computer hardware, without spending a lot of time on configuration and optimization.

Not ideal if you are developing new AI models and primarily need a training framework rather than a deployment solution for existing models.

AI-deployment model-inference MLOps edge-AI performance-optimization
Maintenance 6 / 25
Adoption 5 / 25
Maturity 22 / 25
Community 0 / 25


Stars

9

Forks

Language

Python

License

Apache-2.0

Last pushed

Dec 26, 2025

Commits (30d)

0

Dependencies

2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/athrva98/polyinfer"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
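The curl call above can also be made from Python with only the standard library. The URL pattern is taken directly from the example; the `quality_url` and `fetch_quality` helper names are my own, and no assumption is made about the JSON fields the endpoint returns.

```python
import json
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the quality-endpoint URL shown in the curl example."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo, timeout=10):
    """Fetch and decode the quality report (network call; counts
    against the 100 requests/day limit when used without a key)."""
    with urlopen(quality_url(category, owner, repo), timeout=timeout) as resp:
        return json.load(resp)

# Build the URL for this project without hitting the network.
print(quality_url("ml-frameworks", "athrva98", "polyinfer"))
```

Calling `fetch_quality("ml-frameworks", "athrva98", "polyinfer")` performs the same request as the curl command.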