intel/npu-nn-cost-model

Library for modelling performance costs of different Neural Network workloads on NPU devices

Score: 44 / 100 (Emerging)

This project helps embedded systems engineers and AI solution architects predict the performance of different neural network workloads on Intel's Neural Processing Units (NPUs) before deploying them. By providing details about a neural network operation (like input/output dimensions, kernel size, and execution mode), it outputs the estimated DPU cycles, helping users optimize their AI models for NPU devices. This is for professionals who design and optimize AI systems for specialized hardware.

Use this if you are developing or deploying neural network models on Intel NPU devices and need to accurately estimate their performance costs to make informed design choices.

Not ideal if you are a data scientist primarily focused on model training and experimentation without a direct need to optimize for specific edge AI hardware performance characteristics.

Tags: embedded-AI, edge-computing, neural-network-optimization, hardware-performance-prediction, AI-solution-architecture

No package published · No dependents
Maintenance 10 / 25
Adoption 7 / 25
Maturity 16 / 25
Community 11 / 25


Stars: 34
Forks: 4
Language: C++
License: Apache-2.0
Last pushed: Mar 11, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/intel/npu-nn-cost-model"

The API is open to everyone at 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
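The same endpoint can be called from a script. A minimal Python sketch is below; the URL pattern comes from the curl example above, while the helper names and the assumption that the endpoint returns JSON are illustrative, not part of the documented API.

```python
import json
import urllib.request

# Base URL taken from the curl example in this listing.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository listing."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch a quality report (assumes the endpoint returns JSON)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# Example (requires network access):
# report = fetch_quality("ml-frameworks", "intel", "npu-nn-cost-model")
```

Keeping the URL construction in its own helper makes it easy to stay under the daily request limit by caching responses per repository.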