Klaus-Chow/Model-Deployment-And-Inference
Covers mobile deployment of PyTorch models, integrating several mainstream object detection, text detection, and text recognition algorithms. Provides a generic interface for exporting PyTorch models to ONNX, conversion from ONNX to NCNN models, quantization of mobile models, and model inference functions.
This project helps engineers deploy advanced image analysis models, like those for object detection (YOLOv5) or text recognition (DBNet, CRNN), onto mobile devices. It takes trained PyTorch models and converts them into an optimized format (NCNN) suitable for efficient inference on embedded systems. The primary users are engineers working on integrating AI capabilities into mobile applications.
No commits in the last 6 months.
Use this if you need to convert and optimize PyTorch-trained computer vision models for deployment and fast inference on mobile or embedded devices using the NCNN framework.
Not ideal if you are developing desktop applications or cloud-based AI services, or if you don't need to optimize models for resource-constrained environments.
Stars: 9
Forks: —
Language: C++
License: —
Category: —
Last pushed: Mar 03, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Klaus-Chow/Model-Deployment-And-Inference"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
microsoft/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
onnx/onnx
Open standard for machine learning interoperability
PINTO0309/onnx2tf
Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The...
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This...
onnx/onnxmltools
ONNXMLTools enables conversion of models to ONNX