Nokia-Bell-Labs/data-channel-extension
[NeurIPS'24] DEX: Data Channel Extension for Efficient CNN Inference on Tiny AI Accelerators
This project offers a way for AI hardware engineers and researchers to optimize Convolutional Neural Networks (CNNs) for highly constrained 'tiny AI' devices, like those found in IoT or edge computing. It takes existing CNN models and processing specifications for tiny accelerators as input. The output is a more efficient CNN design that runs faster or uses less memory on these specialized, low-power hardware platforms.
No commits in the last 6 months.
Use this if you are designing or evaluating CNN models for extremely resource-limited AI accelerators and need to improve their inference efficiency.
Not ideal if you are working with large-scale deep learning models on powerful GPUs or general-purpose CPUs.
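The core idea behind "data channel extension" is to preserve spatial detail when shrinking an image to fit a tiny accelerator's input resolution by packing the extra pixels into additional input channels instead of discarding them. The sketch below is only a loose, space-to-depth-style illustration of that general idea, not the paper's actual DEX algorithm or its accelerator-specific mapping:

```python
import numpy as np

def channel_extend(img: np.ndarray, factor: int) -> np.ndarray:
    """Rearrange an (H, W, C) image into (H/factor, W/factor, C*factor^2),
    moving spatial detail into extra channels rather than discarding it.
    Conceptual sketch only; the real DEX method may differ."""
    h, w, c = img.shape
    assert h % factor == 0 and w % factor == 0, "dims must divide evenly"
    return (img.reshape(h // factor, factor, w // factor, factor, c)
               .transpose(0, 2, 1, 3, 4)
               .reshape(h // factor, w // factor, factor * factor * c))

# A 8x8 RGB image becomes a 4x4 "image" with 12 channels; no pixel is lost.
img = np.arange(8 * 8 * 3, dtype=np.int64).reshape(8, 8, 3)
ext = channel_extend(img, 2)
print(ext.shape)  # (4, 4, 12)
```

Because every pixel survives the rearrangement, a CNN whose first layer accepts the wider channel dimension can see the full-resolution content at the reduced spatial size.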
Stars: 9
Forks: 1
Language: Jupyter Notebook
License: —
Category:
Last pushed: Dec 16, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Nokia-Bell-Labs/data-channel-extension"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
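The same endpoint can be queried from Python. The sketch below only builds the URL and parses a sample payload; the JSON field names (`stars`, `forks`, `language`) are assumptions for illustration and should be checked against an actual response:

```python
import json
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def repo_quality_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for a given GitHub owner/repo."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

url = repo_quality_url("Nokia-Bell-Labs", "data-channel-extension")
print(url)

# Assumed response shape, for illustration only. In practice, fetch `url`
# with urllib.request or requests and pass the body to json.loads().
sample_body = '{"stars": 9, "forks": 1, "language": "Jupyter Notebook"}'
data = json.loads(sample_body)
print(data["stars"])  # 9
```

Swapping the hardcoded sample for a real `urllib.request.urlopen(url)` call turns this into a working client, subject to the daily rate limit above.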
Higher-rated alternatives
fastmachinelearning/hls4ml
Machine learning on FPGAs using HLS
alibaba/TinyNeuralNetwork
TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.
KULeuven-MICAS/zigzag
HW Architecture-Mapping Design Space Exploration Framework for Deep Learning Accelerators
fastmachinelearning/hls4ml-tutorial
Tutorial notebooks for hls4ml
doonny/PipeCNN
An OpenCL-based FPGA Accelerator for Convolutional Neural Networks