Tencent/TNN
TNN: a uniform deep learning inference framework for mobile, desktop, and server, developed by Tencent Youtu Lab and Guangying Lab. TNN is distinguished by several outstanding features, including cross-platform capability, high performance, model compression, and code pruning. Built on ncnn and Rapidnet, TNN further strengthens support and performance optimization for mobile devices, while drawing on the extensibility and high performance of existing open-source efforts. TNN has been deployed in multiple Tencent apps, such as Mobile QQ, Weishi, and Pitu. Contributions are welcome; collaborate with us to make TNN a better framework.
TNN is a deep learning inference framework that helps integrate AI features into applications across various devices. It takes pre-trained AI models and efficiently runs them, making functions like real-time face detection, object recognition, or smart text analysis available in your products. This tool is for application developers and AI engineers who need to deploy high-performance, lightweight AI capabilities on mobile, desktop, or server platforms.
4,626 stars. No commits in the last 6 months.
Use this if you need to embed fast, efficient artificial intelligence features like image analysis, facial recognition, or text understanding directly into your software applications for a wide range of devices.
Not ideal if you are looking for a tool to train new AI models or if your primary need is general-purpose machine learning model development and experimentation rather than deployment.
Stars: 4,626
Forks: 774
Language: C++
License: —
Category: ml-frameworks
Last pushed: May 09, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Tencent/TNN"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
microsoft/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
onnx/onnx
Open standard for machine learning interoperability
PINTO0309/onnx2tf
Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The...
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This...
onnx/onnxmltools
ONNXMLTools enables conversion of models to ONNX