triton-inference-server/dali_backend

A Triton backend for running GPU-accelerated data pre-processing pipelines written with DALI's Python API.

Quality score: 57 / 100 (Established)

This tool helps machine learning engineers accelerate the data preparation stage for deep learning models, especially during inference. It takes raw input data, such as images or sensor readings, and processes it efficiently using GPU-accelerated pipelines. The output is pre-processed data ready for your deep learning model, improving overall application performance.


Use this if you need to significantly speed up the data pre-processing step for your deep learning inference applications, especially when dealing with large volumes of data like images or video.

Not ideal if your deep learning models do not require intensive data pre-processing or if your inference server does not use NVIDIA GPUs.
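To serve a DALI pipeline through this backend, the model is placed in a Triton model repository with a `config.pbtxt` that sets `backend: "dali"`. A minimal sketch follows; the model name, tensor names, data types, dims, and batch size are illustrative assumptions, and the input/output names must match those declared in your serialized DALI pipeline:

```
# Hypothetical model repository layout:
#   models/
#     dali_preprocess/
#       config.pbtxt
#       1/
#         model.dali        <- serialized DALI pipeline
name: "dali_preprocess"
backend: "dali"
max_batch_size: 256
input [
  {
    name: "DALI_INPUT_0"    # assumed name; must match the pipeline definition
    data_type: TYPE_UINT8   # e.g. raw encoded image bytes
    dims: [ -1 ]            # variable-length byte stream
  }
]
output [
  {
    name: "DALI_OUTPUT_0"   # assumed name; must match the pipeline definition
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]   # example: normalized CHW image for a vision model
  }
]
```

With this in place, Triton loads the serialized pipeline like any other model, and clients send raw bytes to `dali_preprocess` instead of pre-processing on the CPU.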

Tags: deep-learning-inference, data-pre-processing, GPU-acceleration, machine-learning-operations, computer-vision
No package · No dependents
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 21 / 25


Stars: 141
Forks: 35
Language: C++
License: MIT
Last pushed: Mar 10, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/triton-inference-server/dali_backend"

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000/day.