christophe0606/MLHelium
TinyLlama on Cortex-M55 using CMSIS-DSP and Helium vector instructions
This project helps embedded systems engineers and firmware developers implement small machine learning models on Arm Cortex-M55 microcontrollers. It provides a way to integrate neural network kernels, optimized using Helium vector instructions and CMSIS-DSP, directly into C code. The output is a highly efficient, custom-built ML inference solution for resource-constrained edge devices.
No commits in the last 6 months.
Use this if you need to run very small, simple machine learning models on Cortex-M55 microcontrollers with maximum efficiency and minimal dependencies.
Not ideal if you require automatic model conversion from frameworks like TensorFlow or PyTorch, or need support for fully quantized kernels and Arm NPUs.
Stars: 8
Forks: 1
Language: C
License: Apache-2.0
Category:
Last pushed: Oct 29, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/christophe0606/MLHelium"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
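The curl command above can also be called programmatically. Below is a minimal Python sketch using only the standard library; note that the response field names (`repo`, `stars`, `last_pushed`) are assumptions for illustration, since the API's actual schema is not documented here.

```python
import json
from urllib.request import urlopen  # stdlib; used for the live call shown in the comment below

# Endpoint taken from the curl example above (no key needed, 100 requests/day).
API_URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/christophe0606/MLHelium"

def summarize(payload: dict) -> str:
    """Build a one-line summary; field names are assumed, not documented."""
    return (f"{payload.get('repo', '?')}: "
            f"{payload.get('stars', '?')} stars, "
            f"last pushed {payload.get('last_pushed', '?')}")

# Live call (network required):
#   payload = json.load(urlopen(API_URL))
#   print(summarize(payload))

# Hypothetical sample response, mirroring the figures shown on this page:
sample = json.loads('{"repo": "christophe0606/MLHelium", "stars": 8, "last_pushed": "2024-10-29"}')
print(summarize(sample))
```

The `summarize` helper is hypothetical; adapt the keys once you have inspected a real response.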
Higher-rated alternatives
thu-pacman/chitu: High-performance inference framework for large language models, focusing on efficiency,...
NotPunchnox/rkllama: Ollama alternative for Rockchip NPU: An efficient solution for running AI and Deep learning...
sophgo/LLM-TPU: Run generative AI models in sophgo BM1684X/BM1688
Deep-Spark/DeepSparkHub: DeepSparkHub selects hundreds of application algorithms and models, covering various fields of...
howard-hou/VisualRWKV: VisualRWKV is the visual-enhanced version of the RWKV language model, enabling RWKV to handle...