isarsoft/yolov4-triton-tensorrt

This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server

Quality score: 49 / 100 (Emerging)

This tool helps machine learning engineers and MLOps specialists deploy real-time object detection models like YOLOv4 efficiently. It takes a pre-trained YOLOv4 model and optimizes it for NVIDIA GPUs using TensorRT, then makes it available for live use via Triton Inference Server. The output is a highly performant, scalable inference service ready for integration into applications requiring fast object detection from images or video streams.
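Once the TensorRT engine is loaded into Triton, clients reach it over Triton's standard KServe-v2 HTTP API. A minimal sketch of how such a request could be assembled, assuming a locally running Triton server; the model name `yolov4`, the input tensor name `input`, and the 608×608 input shape are illustrative assumptions, not taken from this listing:

```python
import json

TRITON_BASE = "http://localhost:8000"  # assumed local Triton HTTP endpoint

def infer_request(model_name, input_name, shape, data):
    """Build the KServe-v2 inference URL and JSON body that Triton accepts."""
    url = f"{TRITON_BASE}/v2/models/{model_name}/infer"
    body = {
        "inputs": [{
            "name": input_name,
            "shape": list(shape),
            "datatype": "FP32",
            "data": data,
        }]
    }
    return url, json.dumps(body)

# Hypothetical YOLOv4 input: one 3x608x608 image, flattened to a float list.
url, payload = infer_request("yolov4", "input", (1, 3, 608, 608), [0.0])
```

The payload could then be POSTed to `url` with any HTTP client; in production you would more likely use NVIDIA's `tritonclient` package, which wraps this protocol.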

284 stars. No commits in the last 6 months.

Use this if you need to deploy a YOLOv4 object detection model to production with maximum speed and throughput on NVIDIA GPU hardware, managed as a service.

Not ideal if you are looking for a pre-built application that performs object detection out-of-the-box without requiring deployment expertise.

Tags: MLOps · real-time object detection · GPU inference · model deployment · computer vision
Badges: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 23 / 25


Stars: 284
Forks: 64
Language: C++
License: (not listed)
Last pushed: Jun 02, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/isarsoft/yolov4-triton-tensorrt"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
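The endpoint path appears to follow a `category/owner/repo` pattern. A small helper to build such URLs for other repositories; the pattern is inferred from the single example above, so other categories or repos are assumptions:

```python
def quality_url(owner, repo, category="computer-vision"):
    """Build the quality-score API URL; path pattern inferred from the curl example."""
    return f"https://pt-edge.onrender.com/api/v1/quality/{category}/{owner}/{repo}"

print(quality_url("isarsoft", "yolov4-triton-tensorrt"))
```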