LISTENAI/thinker

a lightweight deep learning framework for CSK60XX series products

Score: 46 / 100 (Emerging)

This project helps embedded systems developers deploy deep learning models efficiently on resource-constrained hardware, specifically CSK60XX series products. It takes trained neural network models (e.g., from PyTorch) and compiles them for edge devices, producing optimized code for fast, memory-efficient inference. Developers working with AI on specialized hardware will find it useful for getting models from research to deployment.

Use this if you are developing AI applications for embedded systems and need to optimize deep learning models for performance and efficiency on specific hardware like CSK60XX series chips.

Not ideal if you are solely working on model training or general-purpose AI development on cloud or desktop environments without specific hardware deployment needs.

embedded-AI edge-computing deep-learning-deployment hardware-optimization AI-on-chip
No package · No dependents

Maintenance: 10 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 13 / 25


Stars: 25
Forks: 4
Language: C
License: Apache-2.0
Last pushed: Mar 06, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/LISTENAI/thinker"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
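The same endpoint can be queried from a script. A minimal sketch in Python using only the standard library; the URL pattern is taken from the curl example above, but the shape of the JSON response (field names, nesting) is an assumption, not documented here:

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the per-project endpoint,
    # e.g. .../quality/ml-frameworks/LISTENAI/thinker
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # No API key needed for up to 100 requests/day;
    # the returned JSON structure is an assumption.
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("ml-frameworks", "LISTENAI", "thinker"))
```

With a free key (1,000 requests/day), the key would presumably be passed as a header or query parameter; check the API's own documentation for the exact mechanism.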