ncnn and csi-nn2

ncnn is a general-purpose neural network inference framework for mobile platforms, while CSI-NN2 is a specialized operator library optimized for Xuantie RISC-V CPUs. They are complementary tools that target different hardware rather than direct competitors.

ncnn — score 71 (Verified)
Maintenance 20/25 | Adoption 10/25 | Maturity 16/25 | Community 25/25
Stars: 22,890 | Forks: 4,405 | Commits (30d): 26 | Language: C++
Downloads: — | License: —
No package published; no dependents.

csi-nn2 — score 56 (Established)
Maintenance 10/25 | Adoption 9/25 | Maturity 16/25 | Community 21/25
Stars: 99 | Forks: 45 | Commits (30d): 0 | Language: C
Downloads: — | License: Apache-2.0
No package published; no dependents.

About ncnn

Tencent/ncnn

ncnn is a high-performance neural network inference framework optimized for the mobile platform

This framework helps mobile app developers integrate high-performance artificial intelligence features into their applications. It takes pre-trained deep learning models and optimizes them to run efficiently on mobile phone CPUs, enabling intelligent functionality directly on users' devices. Its primary audience is mobile application developers.

mobile-app-development deep-learning-deployment mobile-ai embedded-ai performance-optimization
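The deploy flow described above — load a converted model, feed it an image, read back predictions — can be sketched with ncnn's C++ API. The file names and the blob names "data"/"prob" are placeholders: they depend on the model you converted, so treat them as assumptions.

```cpp
// Minimal ncnn inference sketch (requires the ncnn SDK to build).
// Model/blob names below are placeholders for your converted model.
#include "net.h"   // ncnn core header

#include <cstdio>

int classify(const unsigned char* rgb_pixels, int w, int h)
{
    ncnn::Net net;
    // Load the converted model (param = graph structure, bin = weights).
    if (net.load_param("model.param") != 0) return -1;
    if (net.load_model("model.bin") != 0) return -1;

    // Wrap raw RGB pixels in an ncnn::Mat, resizing to the network input size
    // (227x227 here is an assumption; use your model's input size).
    ncnn::Mat in = ncnn::Mat::from_pixels_resize(
        rgb_pixels, ncnn::Mat::PIXEL_RGB, w, h, 227, 227);

    // Normalize in place (note: ncnn spells this "substract").
    const float mean_vals[3] = {104.f, 117.f, 123.f};
    in.substract_mean_normalize(mean_vals, nullptr);

    // Run the network and read the output blob.
    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);        // input blob name from model.param
    ncnn::Mat out;
    ex.extract("prob", out);     // output blob name from model.param

    // Pick the highest-scoring class.
    int best = 0;
    for (int i = 1; i < out.w; i++)
        if (out[i] > out[best]) best = i;
    return best;
}
```

On-device this runs entirely on the phone's CPU (optionally GPU via Vulkan), which is what keeps the intelligence on the user's device rather than a server.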

About csi-nn2

XUANTIE-RV/csi-nn2

An optimized neural network operator library for chips based on the Xuantie CPU.

This is a performance library for developers working with XuanTie CPU-based chips. It takes neural network models and optimizes their operations to run efficiently on XuanTie hardware, including support for various data types and quantization methods. Software engineers and embedded systems developers targeting XuanTie CPUs would use this to accelerate AI workloads.

embedded-systems-development AI-inference-optimization hardware-acceleration edge-AI neural-network-deployment
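CSI-NN2's workflow centers on a session object that owns tensors and operators and is compiled once, then run per inference. The sketch below shows that shape; every identifier in it is an assumption based on a reading of the XUANTIE-RV/csi-nn2 v2 headers and should be verified against the repository before use.

```cpp
// Hedged sketch of a CSI-NN2 v2 session (names assumed; verify against
// the csi-nn2 headers — this will not build without the library).
extern "C" {
#include "csi_nn.h"   // assumed main header of CSI-NN2
}

int run_quantized(struct csinn_tensor* input_data)
{
    // A session groups tensors and ops for one model (assumed allocator).
    struct csinn_session* sess = csinn_alloc_session();
    sess->base_dtype = CSINN_DTYPE_INT8;   // int8 quantized run (assumed enum)
    csinn_session_init(sess);

    csinn_set_input_number(1, sess);
    csinn_set_output_number(1, sess);

    // ... allocate tensors with csinn_alloc_tensor(sess) and register the
    //     model's operators here, then finalize the graph:
    csinn_session_setup(sess);             // compile/prepare (assumed)

    csinn_update_input(0, input_data, sess);
    csinn_session_run(sess);               // execute on the Xuantie CPU

    csinn_session_deinit(sess);
    csinn_free_session(sess);
    return 0;
}
```

The design point worth noting is that quantization (the `base_dtype` choice above) is selected at session level, which is how the library supports the "various data types and quantization methods" mentioned in the description.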


Scores updated daily from GitHub, PyPI, and npm data.