PaddlePaddle/Paddle-Lite

PaddlePaddle High-Performance Deep Learning Inference Engine for Mobile and Edge Devices

Score: 53 / 100 · Established

This tool helps developers deploy deep learning models to a wide range of mobile, embedded, and edge devices. It takes models trained in PaddlePaddle (or converted from other frameworks) and optimizes them for speed and efficiency on target hardware. The output is a highly optimized, lightweight model ready for high-performance inference on various end-user devices. This is for software engineers and machine learning engineers who need to deploy AI models in resource-constrained environments.
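The optimization step described above is typically driven by Paddle-Lite's offline optimizer, `paddle_lite_opt`, which converts a trained PaddlePaddle model into a lightweight format for a target backend. A minimal sketch of composing that invocation (the paths are placeholders, and the exact flag set may vary between Paddle-Lite releases):

```python
import shlex

def opt_command(model_dir: str, out: str, target: str = "arm") -> str:
    """Build a paddle_lite_opt command line for offline model optimization.

    model_dir: directory holding the trained PaddlePaddle model (placeholder path).
    out:       output path prefix for the optimized model.
    target:    hardware backend to optimize for (e.g. "arm" for mobile CPUs).
    """
    return " ".join([
        "paddle_lite_opt",
        f"--model_dir={shlex.quote(model_dir)}",
        f"--optimize_out={shlex.quote(out)}",
        f"--valid_targets={target}",
    ])

# Example: optimize a model in ./model for ARM devices.
cmd = opt_command("./model", "./model_opt")
print(cmd)
```

The resulting string can be run in a shell where the Paddle-Lite tooling is installed; the optimized output is then loaded by the on-device runtime.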

7,233 stars. No commits in the last 6 months.

Use this if you need to run machine learning models directly on mobile phones, IoT devices, or other edge hardware with optimal speed and minimal resource usage.

Not ideal if your models only run on powerful cloud servers or desktop PCs, or if you are not working with deep learning inference.

mobile-AI edge-computing deep-learning-deployment embedded-systems model-optimization
Stale (6 months) · No package · No dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 25 / 25


Stars: 7,233
Forks: 1,627
Language: C++
License: Apache-2.0
Last pushed: May 22, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/PaddlePaddle/Paddle-Lite"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
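The same endpoint can be called from Python instead of curl. A minimal sketch using only the standard library (the URL components come from the curl example above; the response schema is not documented here, so the fetch is left as a commented stub):

```python
import urllib.request
import json

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Compose the quality-score endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "PaddlePaddle", "Paddle-Lite")

# To actually fetch the data (requires network; free tier is 100 requests/day):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```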