justincdavis/trtutils
Quickly and easily use TensorRT inside Python, with support for models out-of-the-box
This tool helps machine learning engineers and researchers deploy and run their trained deep learning models efficiently on NVIDIA GPUs. You can load compiled TensorRT engines, feed them raw data (such as images or sensor readings), and get inference results back with minimal latency. It is designed for anyone who needs to integrate high-performance AI models into a Python application.
Used by 1 other package. Available on PyPI.
Use this if you are a machine learning engineer or researcher looking to run TensorRT models for fast inference in a Python environment without dealing with complex GPU memory management.
Not ideal if you are primarily training models or if you need to deploy models on non-NVIDIA hardware.
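To make the workflow concrete, here is a minimal sketch of the kind of pipeline the package targets. The `TRTEngine` class and `execute` method are assumptions based on common TensorRT-wrapper conventions, not a verified trtutils API (check the project README); the preprocessing helper is ordinary NumPy and runs anywhere.

```python
import numpy as np
# from trtutils import TRTEngine  # hypothetical import; see the trtutils README for the real API


def preprocess(frame, size=(640, 640)):
    """Nearest-neighbor resize and convert HWC uint8 -> NCHW float32 in [0, 1].

    This is the typical layout a vision engine expects; adjust to your model.
    """
    h, w = frame.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row for each output row
    cols = np.arange(size[1]) * w // size[1]   # source column for each output column
    resized = frame[rows][:, cols]             # (H, W, 3) nearest-neighbor resize
    chw = resized.transpose(2, 0, 1).astype(np.float32) / 255.0
    return chw[np.newaxis]                     # add batch dim -> (1, 3, H, W)


# Hypothetical usage on an NVIDIA GPU (names are assumptions, not verified):
# engine = TRTEngine("model.engine")
# outputs = engine.execute([preprocess(img)])
```

The point of a wrapper like this is that the GPU memory allocation, host/device copies, and CUDA stream handling all live behind the engine object, so application code only deals with NumPy arrays.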
Stars
19
Forks
—
Language
Python
License
MIT
Category
Computer Vision
Last pushed
Mar 18, 2026
Commits (30d)
0
Dependencies
8
Reverse dependents
1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/justincdavis/trtutils"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
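The same endpoint can be called from Python with the standard library. Only the URL comes from the curl command above; the JSON response shape is not documented here, so the fetch is left commented out as a sketch.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category, repo):
    # Build the per-repo quality endpoint shown in the curl example above.
    return f"{BASE}/{category}/{repo}"


url = quality_url("computer-vision", "justincdavis/trtutils")
# No key needed for up to 100 requests/day:
# data = json.load(urllib.request.urlopen(url))
```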
Higher-rated alternatives
lucidrains/vit-pytorch
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with...
roflcoopter/viseron
Self-hosted, local only NVR and AI Computer Vision software. With features such as object...
blakeblackshear/frigate
NVR with realtime local object detection for IP cameras
levan92/deep_sort_realtime
A really more real-time adaptation of deep sort
notAI-tech/NudeNet
Lightweight nudity detection