tiny-cuda-nn and tiny-dnn

Both are lightweight C++ deep learning frameworks and thus direct alternatives: tiny-cuda-nn leverages CUDA for GPU-accelerated performance, while tiny-dnn emphasizes header-only, dependency-free usage.

Scores at a glance (each category out of 25; overall is the sum of the four categories):

                 tiny-cuda-nn                 tiny-dnn
Overall          53 (Established)             51 (Established)
Maintenance      6/25                         0/25
Adoption         10/25                        10/25
Maturity         16/25                        16/25
Community        21/25                        25/25
Stars            4,430                        6,020
Forks            550                          1,398
Downloads        n/a                          n/a
Commits (30d)    0                            0
Language         C++                          C++
License          n/a                          n/a
Notes            No package; no dependents    Stale 6m; no package; no dependents

About tiny-cuda-nn

NVlabs/tiny-cuda-nn

Lightning fast C++/CUDA neural network framework

Tiny CUDA Neural Networks helps deep learning engineers efficiently train and query small neural networks, particularly multi-layer perceptrons (MLPs). Given a network configuration and training data, it produces a trained model ready for inference. The framework is aimed at developers building high-performance deep learning applications that require fast model training and inference.

Tags: deep-learning-engineering, neural-network-development, GPU-accelerated-computing, real-time-AI, computer-graphics-engineering
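The "network configuration" mentioned above is a JSON object. As a sketch of what such a configuration looks like, the fragment below follows the schema shown in the tiny-cuda-nn README (a FullyFusedMLP network with a HashGrid input encoding); the hyperparameter values are illustrative, not recommendations:

```json
{
	"loss": {"otype": "L2"},
	"optimizer": {"otype": "Adam", "learning_rate": 1e-3},
	"encoding": {
		"otype": "HashGrid",
		"n_levels": 16,
		"n_features_per_level": 2,
		"log2_hashmap_size": 19,
		"base_resolution": 16,
		"per_level_scale": 2.0
	},
	"network": {
		"otype": "FullyFusedMLP",
		"activation": "ReLU",
		"output_activation": "None",
		"n_neurons": 64,
		"n_hidden_layers": 2
	}
}
```

In the README's examples, a config like this is parsed with nlohmann::json on the C++ side and passed to `tcnn::create_from_config(n_input_dims, n_output_dims, config)` to build the model, trainer, and encoding together.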

About tiny-dnn

tiny-dnn/tiny-dnn

header only, dependency-free deep learning framework in C++14

This project helps embedded systems engineers and IoT device developers integrate deep learning capabilities into their resource-constrained hardware. It takes raw data, such as images or sensor readings, processes it through a neural network, and outputs classifications or predictions directly on the device. This is ideal for developers building intelligent features into edge devices.

Tags: embedded-systems, IoT-device-development, edge-AI, real-time-inference, resource-constrained-computing

Scores updated daily from GitHub, PyPI, and npm data.