open-mmlab/mmdeploy
OpenMMLab Model Deployment Framework
MMDeploy converts pre-trained deep learning models from the OpenMMLab ecosystem into formats that run efficiently on a range of hardware backends, such as CPUs and GPUs, across Linux, Windows, and Android. It takes a trained model and produces an optimized version ready for deployment, enabling faster inference in real-world applications. The tool targets AI and machine learning engineers who need to deploy computer vision models.
3,107 stars. Used by 1 other package. No commits in the last 6 months. Available on PyPI.
Use this if you need to optimize and deploy a computer vision model built with OpenMMLab frameworks to run effectively on specific hardware for production use.
Not ideal if you are looking for a tool to train new deep learning models or if your models are not based on the OpenMMLab ecosystem.
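In practice, MMDeploy drives a conversion from two configs: a deploy config describing the export format and target backend, and the model config from the source OpenMMLab repository. A minimal sketch of what a deploy config might look like, assuming the ONNX Runtime backend and a detection model; the field values here are illustrative, not copied from the repo:

```python
# Illustrative MMDeploy-style deploy config fragment for exporting a
# detection model to ONNX Runtime. Keys follow the common pattern of
# MMDeploy deploy configs; exact values depend on the model and backend.
onnx_config = dict(
    type='onnx',
    export_params=True,
    opset_version=11,
    input_names=['input'],
    output_names=['dets', 'labels'],
    input_shape=None,          # None allows a dynamic input shape
)
codebase_config = dict(
    type='mmdet',              # source OpenMMLab codebase (here: MMDetection)
    task='ObjectDetection',
)
backend_config = dict(
    type='onnxruntime',        # target inference backend
)
```

A config along these lines is typically passed, together with the model config and a trained checkpoint, to the repository's `tools/deploy.py` script, which produces the optimized deployable model.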
Stars
3,107
Forks
704
Language
Python
License
Apache-2.0
Category
Last pushed
Sep 30, 2024
Commits (30d)
0
Dependencies
11
Reverse dependents
1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/open-mmlab/mmdeploy"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
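The curl call above can be wrapped in a small client. A hedged sketch using only Python's standard library, assuming the endpoint returns a JSON body (the response field names are not documented on this page):

```python
import json
import urllib.request

# Base URL taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category: str, owner: str, repo: str) -> str:
    """Compose the quality-endpoint URL, mirroring the curl example."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the quality record for a repository and decode the JSON body."""
    url = build_url(category, owner, repo)
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Without an API key this shares the open quota of 100 requests/day, e.g.:
# fetch_quality("ml-frameworks", "open-mmlab", "mmdeploy")
```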
Related frameworks
triton-inference-server/server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
gpu-mode/Triton-Puzzles
Puzzles for learning Triton
hailo-ai/hailo_model_zoo
The Hailo Model Zoo includes pre-trained models and a full building and evaluation environment.
hyperai/tvm-cn
TVM documentation in Simplified Chinese (TVM 中文文档)
triton-inference-server/model_analyzer
Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory...