PaddlePaddle/Paddle-Lite
PaddlePaddle High Performance Deep Learning Inference Engine for Mobile and Edge (飞桨高性能深度学习端侧推理引擎)
Paddle-Lite helps developers deploy deep learning models to mobile, embedded, and edge devices. It takes models trained in PaddlePaddle (or converted from other frameworks) and optimizes them for speed and efficiency on the target hardware, producing a lightweight model ready for high-performance inference on end-user devices. It is aimed at software engineers and machine learning engineers who need to deploy AI models in resource-constrained environments.
7,233 stars. No commits in the last 6 months.
Use this if you need to run machine learning models directly on mobile phones, IoT devices, or other edge hardware with optimal speed and minimal resource usage.
Not ideal if your models only run on powerful cloud servers or desktop PCs, or if you are not working with deep learning inference.
Stars: 7,233
Forks: 1,627
Language: C++
License: Apache-2.0
Category: —
Last pushed: May 22, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/PaddlePaddle/Paddle-Lite"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
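The same request can be made from Python with the standard library. This is a minimal sketch: the URL path mirrors the curl example above, but the `X-API-Key` header name used for keyed requests is an assumption, as is the JSON response body; check the service's own documentation for the actual authentication scheme and response fields.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_request(category, owner, repo, api_key=None):
    """Build the URL and headers for the repo-quality endpoint.

    The X-API-Key header name is hypothetical; the service's docs
    would specify how a free key is actually passed.
    """
    url = f"{API_BASE}/{category}/{owner}/{repo}"
    headers = {"Accept": "application/json"}
    if api_key:
        headers["X-API-Key"] = api_key  # assumed header name
    return url, headers


def fetch_quality(category, owner, repo, api_key=None):
    """Fetch and decode the quality data for one repository."""
    url, headers = build_request(category, owner, repo, api_key)
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example (performs a live HTTP request):
# data = fetch_quality("ml-frameworks", "PaddlePaddle", "Paddle-Lite")
```

Keyless calls simply omit the header, matching the 100 requests/day tier; passing `api_key` adds it for the keyed tier.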
Related frameworks
PaddlePaddle/Paddle
PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice...
fastai/fastai
The fastai deep learning library
openvinotoolkit/openvino_notebooks
📚 Jupyter notebook tutorials for OpenVINO™
PaddlePaddle/docs
Documentations for PaddlePaddle
msuzen/bristol
Parallel random matrix tools and complexity for deep learning