adobe-research/convmelspec

Convmelspec: Convertible Melspectrograms via 1D Convolutions

Quality score: 36 / 100 (Emerging)

This project helps machine learning engineers and researchers deploy audio-based AI models to mobile devices and other edge environments. It reimplements Mel-spectrogram computation using standard 1D convolutions, so the audio feature-extraction step can be exported together with a trained PyTorch model to on-device frameworks such as CoreML and ONNX. You input a trained audio model, and it outputs a portable model file in which the spectrogram front end runs as ordinary convolution ops, enabling your audio AI to run efficiently across platforms.
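To make the core idea concrete, here is a minimal NumPy sketch of computing a Mel-spectrogram with convolution-style operations: windowed DFT basis vectors act as fixed 1D convolution kernels slid over the signal with stride equal to the hop length, and a triangular mel filterbank is applied to the resulting power spectrum. This is an illustration of the technique only, not convmelspec's actual API; all function names and parameters below are hypothetical.

```python
import numpy as np

def stft_conv_kernels(n_fft, window):
    # Each row is one fixed conv kernel: a windowed DFT basis vector.
    k = np.arange(n_fft)
    freqs = np.arange(n_fft // 2 + 1)
    real = np.cos(2 * np.pi * np.outer(freqs, k) / n_fft) * window
    imag = -np.sin(2 * np.pi * np.outer(freqs, k) / n_fft) * window
    return real, imag

def mel_filterbank(n_mels, n_fft, sr):
    # Simplified triangular mel filters (unnormalized).
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    hz_pts = mel_to_hz(np.linspace(0.0, hz_to_mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for i in range(left, center):      # rising slope
            fb[m - 1, i] = (i - left) / (center - left)
        for i in range(center, right):     # falling slope
            fb[m - 1, i] = (right - i) / (right - center)
    return fb

def melspec_via_conv(x, sr=16000, n_fft=512, hop=128, n_mels=40):
    window = np.hanning(n_fft)
    real_k, imag_k = stft_conv_kernels(n_fft, window)
    # Strided "convolution": apply every DFT kernel at hop-spaced offsets.
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] for i in range(n_frames)])
    real = frames @ real_k.T               # (n_frames, n_fft//2 + 1)
    imag = frames @ imag_k.T
    power = real ** 2 + imag ** 2          # power spectrogram
    fb = mel_filterbank(n_mels, n_fft, sr)
    return (power @ fb.T).T                # (n_mels, n_frames)
```

In a PyTorch implementation of the same idea, the frame-and-multiply step would be expressed as a `torch.nn.Conv1d` with fixed weights and `stride=hop`, so CoreML and ONNX exporters see only standard convolution operators rather than an FFT op they may not support.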

147 stars. No commits in the last 6 months.

Use this if you need to deploy your audio machine learning model to a mobile device or embedded system and require a Mel-spectrogram computation to be part of the on-device inference.

Not ideal if you are only training audio models and do not need to deploy them to a cross-platform, on-device environment, or if your model does not rely on Mel-spectrogram features.

Tags: on-device-AI, audio-analysis, machine-learning-deployment, mobile-AI, edge-computing
Status: Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 10 / 25

Stars: 147
Forks: 10
Language: Python
License: Apache-2.0
Last pushed: May 13, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/adobe-research/convmelspec"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.