vicharak-in/Axon-NPU-Guide

This repository contains a guide on setting up the toolkits needed to use the NPU on the Axon single-board computer for running various CNN models.

Score: 41 / 100 (Emerging)

This guide helps embedded systems engineers and AI developers run their AI models efficiently on Axon single-board computers. It provides instructions for setting up the toolkits needed to convert custom CNN models into the NPU's native .rknn format, so the models can leverage the onboard Neural Processing Unit (NPU) for faster inference. You supply your existing CNN or LLM models, and the guide shows how to convert them for optimized performance on Axon hardware.

Use this if you need to deploy and optimize custom computer vision or large language models for high-performance inference on Axon-based embedded devices.

Not ideal if you are looking for a plug-and-play solution for pre-trained models without needing to understand the underlying toolkit setup or model conversion process.
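The conversion flow the description refers to can be sketched with Rockchip's RKNN-Toolkit2 Python API. This is a minimal illustration, not the repository's own scripts: the `rk3588` target platform is an assumption based on Axon's SoC, the file paths are placeholders, and quantization is left disabled for brevity.

```python
# Minimal sketch of an ONNX -> .rknn conversion with RKNN-Toolkit2.
# Assumptions: Axon's SoC is an RK3588, and rknn-toolkit2 is installed
# on an x86 development host; the actual guide's scripts may differ.
try:
    from rknn.api import RKNN  # pip install rknn-toolkit2
except ImportError:            # toolkit absent: keep the sketch importable
    RKNN = None


def convert_onnx_to_rknn(onnx_path: str, rknn_path: str) -> None:
    """Convert an ONNX CNN model to the NPU's .rknn format."""
    if RKNN is None:
        raise RuntimeError("rknn-toolkit2 is not installed")
    rknn = RKNN()
    rknn.config(target_platform="rk3588")  # assumed Axon target
    rknn.load_onnx(model=onnx_path)
    rknn.build(do_quantization=False)      # supply a dataset for INT8
    rknn.export_rknn(rknn_path)
    rknn.release()


# Usage (on the development host, not on the Axon itself):
#   convert_onnx_to_rknn("model.onnx", "model.rknn")
```

The exported .rknn file is then copied to the board and loaded with the runtime there; consult the repository for the Axon-specific steps.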

Tags: embedded-AI, edge-computing, AI-model-deployment, neural-network-optimization, computer-vision-inference
No License · No Package · No Dependents
Maintenance: 10 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 16 / 25


Stars: 25
Forks: 6
Language: Shell
License: none
Last pushed: Mar 06, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/vicharak-in/Axon-NPU-Guide"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.