justincdavis/trtutils

Quickly and easily use TensorRT from Python, with out-of-the-box support for common models

Quality score: 45 / 100 (Emerging)

This tool helps machine learning engineers and researchers efficiently deploy and run their trained deep learning models on NVIDIA GPUs. You can take your pre-built TensorRT engines, feed them raw data (such as images or sensor readings), and get back inference results with low latency. It is designed for those who need to integrate high-performance AI models into applications.

Used by 1 other package. Available on PyPI.

Use this if you are a machine learning engineer or researcher looking to run TensorRT models for fast inference in a Python environment without dealing with complex GPU memory management.
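To illustrate the kind of workflow described above, here is a pseudocode-level Python sketch. The class name `TRTEngine`, its constructor argument, and the `execute` method are illustrative assumptions, not confirmed trtutils API; consult the package's own documentation for the real interface. Running anything like this also requires an NVIDIA GPU with TensorRT installed.

```python
# Illustrative sketch only -- the names below are assumptions, not confirmed trtutils API.
import numpy as np

from trtutils import TRTEngine  # assumed entry point

# Load a pre-built TensorRT engine file; per the description above, the
# library handles GPU memory allocation and host<->device transfers for you.
engine = TRTEngine("model.engine")

# Prepare an input batch (e.g. one 640x640 RGB image in NCHW layout).
image = np.random.rand(1, 3, 640, 640).astype(np.float32)

# Run inference and get the outputs back as NumPy arrays.
outputs = engine.execute([image])
print([o.shape for o in outputs])
```

The point of such a wrapper is that the caller never touches CUDA buffers directly, which matches the "without dealing with complex GPU memory management" claim above.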

Not ideal if you are primarily training models or if you need to deploy models on non-NVIDIA hardware.

deep-learning-deployment model-inference gpu-optimization edge-ai computer-vision
Maintenance: 13 / 25
Adoption: 7 / 25
Maturity: 25 / 25
Community: 0 / 25


Stars: 19
Forks:
Language: Python
License: MIT
Last pushed: Mar 18, 2026
Commits (30d): 0
Dependencies: 8
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/justincdavis/trtutils"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
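The same endpoint can be called from Python. This is a minimal sketch assuming only what the curl example shows: the route pattern `category/owner/repo` and a JSON response (the response fields themselves are not documented here, so the code only builds the URL and decodes whatever comes back).

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL, following the pattern in the curl example."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report. Requires network access."""
    with urllib.request.urlopen(quality_url(category, owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Matches the curl example above for justincdavis/trtutils.
    print(quality_url("computer-vision", "justincdavis", "trtutils"))
```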