LISTENAI/thinker
A lightweight deep learning framework for CSK60XX series products
This project helps embedded-systems developers deploy deep learning models efficiently on resource-constrained hardware, specifically the CSK60XX series. It takes trained neural network models (e.g., from PyTorch) and compiles them for edge devices, producing compact code for fast, memory-efficient inference. Developers running AI on specialized hardware will find it useful for moving models from research to deployment.
Use this if you are developing AI applications for embedded systems and need to optimize deep learning models for performance and efficiency on specific hardware like CSK60XX series chips.
Not ideal if you work solely on model training, or on general-purpose AI development in cloud or desktop environments with no specific hardware-deployment needs.
Stars: 25
Forks: 4
Language: C
License: Apache-2.0
Category: ml-frameworks
Last pushed: Mar 06, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/LISTENAI/thinker"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
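The same endpoint can be called from Python instead of curl. A minimal sketch using only the standard library; the base URL and category path come from the curl command above, while the response schema is not documented on this page, so the body is parsed as generic JSON:

```python
# Minimal sketch of querying the quality endpoint shown above.
# Assumption: the service returns a JSON body; its schema is not
# documented on this page, so we return the parsed object as-is.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-data URL for one repository."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str, timeout: float = 10.0):
    """GET the endpoint without an API key (rate-limited to 100 requests/day)."""
    with urllib.request.urlopen(quality_url(category, owner, repo), timeout=timeout) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Reconstructs the exact URL used in the curl example.
    print(quality_url("ml-frameworks", "LISTENAI", "thinker"))
```

With a free API key, the key would presumably be attached to the request, but the header or query-parameter name is not shown on this page, so it is omitted here.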
Higher-rated alternatives
apache/tvm
Open Machine Learning Compiler Framework
uxlfoundation/oneDNN
oneAPI Deep Neural Network Library (oneDNN)
Tencent/ncnn
ncnn is a high-performance neural network inference framework optimized for the mobile platform
OpenMined/TenSEAL
A library for doing homomorphic encryption operations on tensors
iree-org/iree-turbine
IREE's PyTorch Frontend, based on Torch Dynamo.