Phoenix8215/A-White-Paper-on-Neural-Network-Deployment

Model Deployment White Paper (CUDA | ONNX | TensorRT | C++) 🚀🚀🚀

Score: 44 / 100 (Emerging)

This white paper helps machine learning engineers and AI practitioners deploy deep learning models efficiently on NVIDIA hardware platforms. It walks through taking a trained neural network and optimizing it for real-world application performance, ending with a highly performant model ready for production inference.

244 stars. No commits in the last 6 months.

Use this if you need to optimize and deploy deep learning models to NVIDIA GPUs for faster, more efficient performance in production environments.

Not ideal if you are solely focused on the theoretical aspects of deep learning model training or deploying to non-NVIDIA hardware.

Tags: Machine Learning Engineering · AI Deployment · Deep Learning Optimization · Edge AI · GPU Acceleration
Badges: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 18 / 25


Stars: 244
Forks: 36
Language:
License: GPL-3.0
Last pushed: Sep 18, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Phoenix8215/A-White-Paper-on-Neural-Network-Deployment"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
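The same endpoint can be called from code. A minimal Python sketch using only the standard library; it assumes the endpoint returns JSON, which is not documented on this page:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

url = quality_url(
    "ml-frameworks",
    "Phoenix8215",
    "A-White-Paper-on-Neural-Network-Deployment",
)

if __name__ == "__main__":
    # Fetch and pretty-print the response (assumed JSON).
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    print(json.dumps(data, indent=2))
```

Within the free tier this needs no API key; if you obtain a key for the 1,000/day limit, consult the service for how to attach it, since the header or parameter name is not shown here.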