Tencent/Forward
A library for high performance deep learning inference on NVIDIA GPUs.
Forward helps machine learning engineers and researchers accelerate the deployment of their deep learning models on NVIDIA GPUs. It takes trained models from frameworks like TensorFlow, PyTorch, Keras, or ONNX and converts them into high-performance TensorRT inference engines. This allows AI practitioners to achieve faster predictions without complex manual optimization steps.
555 stars. No commits in the last 6 months.
Use this if you need to significantly speed up your deep learning model's prediction time on NVIDIA GPUs across various domains like computer vision, natural language processing, or recommendation systems.
Not ideal if your deployment environment does not use NVIDIA GPUs or if you are not working with deep learning models.
Stars: 555
Forks: 63
Language: C++
License: —
Category:
Last pushed: Jan 29, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Tencent/Forward"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
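The curl call above can be wrapped in a few lines of Python using only the standard library. This is a sketch, not part of the API's official client: the helper names (`quality_url`, `fetch_quality`) are ours, and `fetch_quality` assumes the endpoint returns a JSON body, which the page implies but whose schema is not documented here.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    # Assembles the documented endpoint path; helper name is hypothetical.
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # Performs the same GET as the curl example (requires network access).
    # Assumption: the endpoint returns JSON; the schema is not shown above.
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Build the URL for the repository on this page:
print(quality_url("ml-frameworks", "Tencent", "Forward"))
```

Within the unauthenticated tier, this can be called up to 100 times per day; how a free key is attached to requests is not specified on this page, so check the API's documentation before adding one.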
Higher-rated alternatives
microsoft/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
onnx/onnx
Open standard for machine learning interoperability
PINTO0309/onnx2tf
Self-created tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC).
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs.
onnx/onnxmltools
ONNXMLTools enables conversion of models to ONNX