iwatake2222/play_with_tensorrt
Sample projects for TensorRT in C++
This project provides sample C++ code and project structures to help developers implement high-performance deep learning inference on NVIDIA GPUs using TensorRT. It takes image or video files, or live camera feeds, and processes them with pre-trained models. This is ideal for C++ developers building GPU-accelerated applications that require fast, efficient execution of AI models.
197 stars. No commits in the last 6 months.
Use this if you are a C++ developer needing a clear, multi-platform example to integrate TensorRT into your applications for accelerated AI model inference.
Not a good fit if you are not a C++ developer, or if you are looking for a high-level Python library for deep learning inference.
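To make the description concrete, the core TensorRT C++ inference pattern that projects like this one build on looks roughly like the sketch below. This is a minimal illustration, not code from the repository: the engine path "model.engine" is a placeholder (a serialized engine would be built beforehand, e.g. with trtexec), and buffer allocation and pre/post-processing are elided.

```cpp
// Minimal TensorRT inference sketch (assumes TensorRT 8.x is installed and a
// pre-built serialized engine file exists; "model.engine" is a placeholder).
#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <memory>
#include <vector>

// TensorRT requires the caller to supply a logger implementation.
struct Logger : nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << "\n";
    }
};

int main() {
    Logger logger;

    // Load the serialized engine from disk.
    std::ifstream file("model.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    // Deserialize the engine and create an execution context.
    std::unique_ptr<nvinfer1::IRuntime> runtime{
        nvinfer1::createInferRuntime(logger)};
    std::unique_ptr<nvinfer1::ICudaEngine> engine{
        runtime->deserializeCudaEngine(blob.data(), blob.size())};
    std::unique_ptr<nvinfer1::IExecutionContext> context{
        engine->createExecutionContext()};

    // A real application would now cudaMalloc device buffers, copy the
    // preprocessed image input to the GPU, run inference with
    //   context->executeV2(bindings);
    // then copy the results back and postprocess them.
    return 0;
}
```

The sample code in the repository wraps this load/deserialize/execute cycle together with image pre- and post-processing for the bundled models.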
Stars: 197
Forks: 34
Language: C++
License: Apache-2.0
Category: ml-frameworks
Last pushed: Feb 17, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/iwatake2222/play_with_tensorrt"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Higher-rated alternatives
microsoft/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
onnx/onnx
Open standard for machine learning interoperability
PINTO0309/onnx2tf
Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The...
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This...
onnx/onnxmltools
ONNXMLTools enables conversion of models to ONNX