sachinsharma9780/Build-ML-pipelines-for-Computer-Vision-NLP-and-Graph-Neural-Networks-using-Nvidia-Triton-Server
Build ML pipelines for Computer Vision, NLP and Graph Neural Networks using Triton Server.
This project helps machine learning engineers and MLOps practitioners deploy and manage trained AI models in production. It demonstrates how to set up Nvidia's Triton Inference Server to host deep learning models for natural language processing, computer vision, and graph neural networks. The server loads trained models from a model repository and serves real-time predictions or feature embeddings, maximizing hardware utilization by handling multiple inference requests concurrently.
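Triton discovers models through a versioned model repository on disk. A minimal sketch of such a layout is shown below; the model names and file names are illustrative assumptions, not taken from this repository:

```text
model_repository/
├── text_encoder/           # hypothetical NLP model (e.g. an ONNX-exported transformer)
│   ├── config.pbtxt        # model configuration (platform, batching, instances)
│   └── 1/                  # version directory
│       └── model.onnx
└── image_classifier/       # hypothetical computer vision model
    ├── config.pbtxt
    └── 1/
        └── model.onnx
```

Starting the server with `tritonserver --model-repository=/path/to/model_repository` loads every model it finds, so CV, NLP, and GNN models can be served side by side from one process.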
No commits in the last 6 months.
Use this if you need to deploy and serve multiple machine learning models at scale, manage inference requests efficiently, and optimize hardware usage for your AI applications.
Not ideal if you are looking for a simple, single-model deployment solution without the need for advanced features like dynamic batching or concurrent model execution.
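The advanced features mentioned above are enabled per model in its `config.pbtxt`. A minimal sketch, assuming a hypothetical ONNX model named `text_encoder` (the name, batch sizes, and instance count are illustrative, not from this repository):

```protobuf
name: "text_encoder"              # hypothetical model name
platform: "onnxruntime_onnx"
max_batch_size: 32

# Dynamic batching: Triton groups individual requests into larger
# batches server-side, trading a small queueing delay for throughput.
dynamic_batching {
  preferred_batch_size: [ 8, 16, 32 ]
  max_queue_delay_microseconds: 100
}

# Concurrent model execution: run two instances of this model on the
# GPU so multiple batches can be in flight at once.
instance_group [
  {
    count: 2
    kind: KIND_GPU
  }
]
```

Without these blocks, Triton serves the model one request at a time per instance, which is the simpler single-model scenario the note above refers to.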
Stars: 42
Forks: 9
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Jul 05, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/sachinsharma9780/Build-ML-pipelines-for-Computer-Vision-NLP-and-Graph-Neural-Networks-using-Nvidia-Triton-Server"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
triton-inference-server/server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
gpu-mode/Triton-Puzzles
Puzzles for learning Triton
hailo-ai/hailo_model_zoo
The Hailo Model Zoo includes pre-trained models and a full building and evaluation environment
open-mmlab/mmdeploy
OpenMMLab Model Deployment Framework
hyperai/tvm-cn
TVM Documentation in Chinese Simplified / TVM 中文文档