Blaizzy/mlx-audio-swift

A modular Swift SDK for audio processing with MLX on Apple Silicon

Score: 54 / 100 (Established)

This is a tool for Apple developers building macOS and iOS apps that need to process audio. It integrates advanced audio features, such as text-to-speech synthesis, speech-to-text transcription, and speaker identification, directly into your applications. You provide text or audio input, and it outputs synthesized speech, transcribed text, or speaker-identification data. It is aimed at Swift developers building audio-centric experiences.


Use this if you are a Swift developer building an application for Apple devices (macOS 14+ or iOS 17+) and want to add sophisticated audio-processing capabilities, such as speech synthesis or transcription, using MLX-optimized machine learning models.

Not ideal if you are not a Swift developer, are not targeting Apple Silicon, or only need basic audio recording and playback.

Tags: macOS-app-development, iOS-app-development, speech-recognition, text-to-speech, speaker-diarization
No package · No dependents
Maintenance: 13 / 25
Adoption: 10 / 25
Maturity: 13 / 25
Community: 18 / 25


Stars: 446
Forks: 56
Language: Swift
License: MIT
Last pushed: Mar 17, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/Blaizzy/mlx-audio-swift"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
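If you prefer to build the request URL in code, a minimal sketch is below. Note the `{category}/{owner}/{repo}` path pattern is inferred from the single example above and is an assumption, not documented API behavior:

```python
# Hedged sketch: assemble the quality-API URL for a repository.
# The path pattern /api/v1/quality/{category}/{owner}/{repo} is an
# assumption generalized from the one example shown on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Return the (assumed) quality-API endpoint for a repo."""
    return f"{BASE}/{category}/{owner}/{repo}"

print(quality_url("voice-ai", "Blaizzy", "mlx-audio-swift"))
```

For the example repository this reproduces the exact URL used in the curl command above; fetch it with any HTTP client and parse the JSON response.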