emptysoal/TensorRT-v8-YOLOv5-v5.0

Based on TensorRT 8.2; builds the YOLOv5-v5.0 network by hand to speed up YOLOv5-v5.0 inference

Score: 38 / 100 (Emerging)

This project helps developers convert trained YOLOv5-v5.0 object detection models from PyTorch into a highly optimized format for faster inference on NVIDIA GPUs. You provide a trained PyTorch model and an NVIDIA GPU, and it outputs a `.plan` file (a serialized TensorRT engine) and a C++ executable for low-latency object detection on images or video streams. It targets AI/ML engineers and embedded-systems developers deploying vision models in performance-critical applications.

Use this if you need to deploy a YOLOv5-v5.0 model for real-time object detection where every millisecond counts, such as in robotics, autonomous vehicles, or industrial inspection.

Not ideal if you are still in the training or prototyping phase of your YOLOv5 model, or if you don't have access to NVIDIA TensorRT-compatible hardware.

object-detection real-time-inference edge-ai computer-vision model-deployment
No Package · No Dependents
Maintenance 6 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 11 / 25
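
The four category scores above appear to add up to the overall 38 / 100. A minimal sketch of that reading in Python; the simple-sum weighting is an assumption based on the numbers shown, not documented behavior:

```python
# Assumption: the overall score is the plain sum of four 25-point
# category scores, as suggested by the breakdown listed above.
scores = {
    "Maintenance": 6,
    "Adoption": 5,
    "Maturity": 16,
    "Community": 11,
}

overall = sum(scores.values())  # each category is out of 25, so max is 100
print(overall)  # 38, matching the listed 38 / 100
```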


Stars: 13
Forks: 2
Language: C++
License: MIT
Last pushed: Oct 18, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/emptysoal/TensorRT-v8-YOLOv5-v5.0"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.