olilarkin/iPlug2OnnxRuntime

ML Audio plug-in example using iPlug2 & ONNX Runtime

Score: 18 / 100 (Experimental)

This is an example for audio developers who want to integrate machine learning models into their audio applications or plug-ins. It demonstrates how to take a trained neural-network model for audio processing and embed it directly into an application, yielding an audio plug-in that can apply learned sound transformations, useful for tasks such as virtual-instrument creation or effects processing. Its intended audience is audio plug-in developers and embedded audio engineers.
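As a rough illustration of the pattern such a plug-in follows (load the model once, then run inference on each fixed-size audio block), here is a minimal Python sketch. It is not code from this repository: `run_model` is a stand-in for a real ONNX Runtime `InferenceSession.run` call, and the block size and gain value are invented for the example.

```python
# Minimal sketch of per-block neural inference in an audio processing loop.
# NOTE: run_model() is a placeholder for an ONNX Runtime session call, e.g.
#   session = onnxruntime.InferenceSession("model.onnx")
#   out, = session.run(None, {"input": block})
# It is kept dependency-free here so the structure is easy to follow.

BLOCK_SIZE = 64  # hypothetical host buffer size


def run_model(block):
    """Placeholder 'model': applies a fixed gain, like a trivial learned effect."""
    return [0.5 * sample for sample in block]


def process(samples):
    """Split the input into fixed-size blocks and run the model on each one."""
    out = []
    for start in range(0, len(samples), BLOCK_SIZE):
        block = samples[start:start + BLOCK_SIZE]
        out.extend(run_model(block))
    return out
```

In a real plug-in the session would be created once at load time and `process` would correspond to the host's audio callback, so no allocation or model loading happens on the audio thread.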

No commits in the last 6 months.

Use this if you are an audio developer wanting to build a custom audio plugin or application that incorporates machine learning models for sound processing.

Not ideal if you are an end-user musician or producer looking for a ready-to-use audio effect, as this project requires development and compilation.

Tags: audio-plugin-development, digital-audio-workstation, sound-design, audio-engineering, embedded-audio
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 3 / 25


Stars: 36
Forks: 1
Language: Python
License: None
Last pushed: Dec 02, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/olilarkin/iPlug2OnnxRuntime"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
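For those who prefer Python to curl, the same endpoint can be queried with the standard library alone. A sketch, assuming the endpoint returns JSON; the helper names and the split into a URL-building function are mine, not part of the documented API:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category, owner, repo):
    """Build the quality-score URL for a repository in a given category."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category, owner, repo):
    """Fetch and decode the JSON quality report (requires network access)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl example above.
    print(fetch_quality("ml-frameworks", "olilarkin", "iPlug2OnnxRuntime"))
```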