Tencent/PocketFlow

An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.

Overall score: 49 / 100 (Emerging)

This framework helps machine learning engineers and AI application developers shrink large deep learning models so they run faster on devices with limited computing power, such as mobile phones. You provide an existing deep learning model and specify the desired compression or speed-up ratio; the framework then automatically outputs a smaller, faster model that is ready for deployment, preserving accuracy as far as possible.

2,914 stars. No commits in the last 6 months.

Use this if you need to deploy your deep learning models for tasks like computer vision or speech recognition on mobile devices or other resource-constrained environments.

Not ideal if you are working with traditional machine learning models or if computational efficiency is not a primary concern for your deployment target.

Tags: deep-learning-deployment, mobile-ai, model-optimization, inference-acceleration, edge-ai
Flags: Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 23 / 25


Stars: 2,914
Forks: 492
Language: Python
License:
Last pushed: Mar 31, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Tencent/PocketFlow"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
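For programmatic use, the curl call above can be wrapped in a few lines of Python. This is a minimal sketch using only the standard library; the endpoint URL is taken directly from the card, but the shape of the JSON response (field names and nesting) is an assumption, so inspect the real payload before relying on specific keys.

```python
# Minimal client sketch for the pt-edge quality API.
# The URL pattern matches the curl example above; the response schema
# is NOT documented here, so this only fetches and pretty-prints it.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(collection: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository in a given collection."""
    return f"{API_BASE}/{collection}/{owner}/{repo}"


def fetch_quality(collection: str, owner: str, repo: str) -> dict:
    """GET the quality record and decode the JSON body (live request)."""
    with urllib.request.urlopen(quality_url(collection, owner, repo)) as resp:
        return json.load(resp)


# Example usage (performs a live request, subject to the 100/day limit):
# data = fetch_quality("ml-frameworks", "Tencent", "PocketFlow")
# print(json.dumps(data, indent=2))
```

Keeping the URL construction in its own helper makes it easy to point the same client at other repositories in the same collection.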