Nokia-Bell-Labs/data-channel-extension

[NeurIPS'24] DEX: Data Channel Extension for Efficient CNN Inference on Tiny AI Accelerators

Score: 29 / 100 — Experimental

This project offers a way for AI hardware engineers and researchers to optimize Convolutional Neural Networks (CNNs) for highly constrained 'tiny AI' devices, like those found in IoT or edge computing. It takes existing CNN models and processing specifications for tiny accelerators as input. The output is a more efficient CNN design that runs faster or uses less memory on these specialized, low-power hardware platforms.

No commits in the last 6 months.

Use this if you are designing or evaluating CNN models for extremely resource-limited AI accelerators and need to improve their inference efficiency.

Not ideal if you are working with large-scale deep learning models on powerful GPUs or general-purpose CPUs.
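To make the core idea concrete: one common way to pack more spatial data into a tiny accelerator's input channels is a space-to-depth rearrangement, where pixels that would otherwise be discarded by downsampling are moved into additional channels. The sketch below is a minimal, hypothetical illustration of that general pattern, not DEX's actual mapping — the repository defines the real transformation.

```python
import numpy as np

def extend_channels(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Space-to-depth sketch: fold factor x factor spatial blocks into channels.

    A (C, H, W) image becomes (C * factor^2, H / factor, W / factor),
    so no pixel values are discarded -- they move into extra channels.
    """
    c, h, w = img.shape
    out = img.reshape(c, h // factor, factor, w // factor, factor)
    out = out.transpose(0, 2, 4, 1, 3)
    return out.reshape(c * factor * factor, h // factor, w // factor)
```

For example, a 3-channel 64x64 input becomes a 12-channel 32x32 tensor, which can better utilize accelerators whose per-layer channel capacity would otherwise sit idle.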

Tags: TinyML · Edge AI · Hardware Acceleration · Embedded AI · CNN Optimization
Badges: Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 8 / 25


Stars: 9
Forks: 1
Language: Jupyter Notebook
License: —
Last pushed: Dec 16, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Nokia-Bell-Labs/data-channel-extension"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
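The same endpoint can be called from a script. A minimal Python sketch is below; the URL pattern comes from the curl command above, but the fields in the JSON response are an assumption, so the parsing step is left commented out.

```python
import urllib.request
import json

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-report URL for a repository (pattern from the curl example)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

url = quality_url("ml-frameworks", "Nokia-Bell-Labs", "data-channel-extension")

# Uncomment to fetch the report; response field names are not documented here:
# with urllib.request.urlopen(url) as resp:
#     report = json.load(resp)
```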