triton-inference-server/server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.

Quality score: 66 / 100 (Established)

This tool streamlines the deployment of AI models, making them ready for use in real-world applications. You input trained AI models from various frameworks (like PyTorch or TensorFlow), and it serves predictions or classifications efficiently. Data scientists, MLOps engineers, and developers building AI-powered applications would use this.
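To make the workflow above concrete, here is a minimal sketch of how a model is typically handed to Triton: models live in a "model repository" directory, each with a `config.pbtxt` and a numbered version folder. The model name, dimensions, and file below are illustrative assumptions, not taken from this listing.

```
# Hypothetical model repository layout
models/
  my_classifier/
    config.pbtxt
    1/
      model.onnx

# models/my_classifier/config.pbtxt — a minimal config sketch
name: "my_classifier"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  { name: "input", data_type: TYPE_FP32, dims: [ 3, 224, 224 ] }
]
output [
  { name: "output", data_type: TYPE_FP32, dims: [ 1000 ] }
]
```

The server is then pointed at the repository (e.g. `tritonserver --model-repository=/models`) and exposes HTTP/gRPC inference endpoints for every model it loads.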

10,426 stars. Actively maintained with 19 commits in the last 30 days.

Use this if you need to deploy and manage a wide variety of AI models at scale, ensuring high performance and efficient resource utilization across cloud, data center, or edge environments.

Not ideal if you are a data scientist still in the experimentation phase and not yet ready to deploy a model for production use.

Tags: AI model deployment, MLOps, real-time inference, cloud AI, edge AI
No package published · No dependents
Score breakdown:
- Maintenance: 17 / 25
- Adoption: 10 / 25
- Maturity: 16 / 25
- Community: 23 / 25


Stars: 10,426
Forks: 1,734
Language: Python
License: BSD-3-Clause
Last pushed: Mar 13, 2026
Commits (30d): 19

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/triton-inference-server/server"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
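For programmatic access, the same endpoint can be called from Python. This is a minimal sketch using only the standard library; the URL path comes from the curl example above, but the shape of the JSON response (field names such as `score`) is an assumption, not documented here.

```python
import json
import urllib.request

# Base path taken from the curl example above
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the quality-API URL for a repo in a given category."""
    return f"{API_BASE}/{category}/{repo}"


url = quality_url("ml-frameworks", "triton-inference-server/server")
print(url)

# Uncomment to actually fetch (counts against the 100 requests/day
# anonymous limit); the parsed fields are assumptions:
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```

The anonymous limit makes it reasonable to cache responses locally rather than re-fetching on every run.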