onnx and onnx-tensorrt
ONNX-TensorRT is a backend that runs ONNX models on NVIDIA TensorRT. The two projects are complementary and are typically used together for optimized inference on NVIDIA hardware.
About onnx
onnx/onnx
Open standard for machine learning interoperability
This project provides an open-source format for AI models, letting developers move models between machine learning tools. A model trained in one framework can be exported to the standardized ONNX format and then used (especially for scoring/inference) in another framework or on different hardware. Its primary users are AI developers who build and deploy machine learning models.
About onnx-tensorrt
onnx/onnx-tensorrt
ONNX-TensorRT: TensorRT backend for ONNX
This project helps deep learning engineers and AI practitioners run ONNX neural network models efficiently on NVIDIA GPUs using TensorRT. It takes an ONNX model as input and produces an optimized TensorRT engine that executes inference at high speed. It is aimed at those who need to deploy and accelerate AI models in production environments.
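The ONNX-model-in, TensorRT-engine-out flow described above can be sketched with the project's Python backend API. This requires an NVIDIA GPU with TensorRT installed, so it is shown here as an illustrative sketch; `"model.onnx"` and the input shape are placeholder assumptions for whatever model you exported:

```python
import numpy as np
import onnx
import onnx_tensorrt.backend as backend

# Load a previously exported ONNX model ("model.onnx" is a placeholder path).
model = onnx.load("model.onnx")

# prepare() parses the ONNX graph and builds an optimized TensorRT engine
# for the selected GPU.
engine = backend.prepare(model, device="CUDA:0")

# Run inference; the array shape must match the model's declared input
# (a 1x3x224x224 image tensor is assumed here for illustration).
input_data = np.random.random(size=(1, 3, 224, 224)).astype(np.float32)
output = engine.run(input_data)[0]
print(output.shape)
```

The engine-building step is where TensorRT applies its optimizations (layer fusion, precision selection, kernel autotuning), which is why the same ONNX model typically runs faster through this backend than through a generic runtime.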