mlx-audio-swift and speech-swift

These are complementary tools: mlx-audio-swift provides the lower-level signal handling and MLX integration, which speech-swift builds upon for higher-level ASR, TTS, and diarization tasks.

mlx-audio-swift
Score: 54 (Established)
Maintenance: 13/25 | Adoption: 10/25 | Maturity: 13/25 | Community: 18/25
Stars: 446 | Forks: 56 | Commits (30d): 0 | Language: Swift | License: MIT
No package published, no known dependents

speech-swift
Score: 52 (Established)
Maintenance: 13/25 | Adoption: 10/25 | Maturity: 11/25 | Community: 18/25
Stars: 417 | Forks: 46 | Commits (30d): 0 | Language: Swift | License: Apache-2.0
No package published, no known dependents

About mlx-audio-swift

Blaizzy/mlx-audio-swift

A modular Swift SDK for audio processing with MLX on Apple Silicon

This is a tool for Apple developers creating macOS and iOS apps that need to process audio. It integrates advanced audio features, such as converting text to speech, transcribing speech to text, and identifying who is speaking, directly into your applications. You provide text or audio input, and it outputs synthesized speech, transcribed text, or speaker identification data. It is aimed at Swift developers building audio-centric experiences.
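The text-in/audio-out and audio-in/text-out flows described above can be sketched as a set of protocols. Note that every type and method name below is hypothetical, chosen for illustration only; it is not mlx-audio-swift's actual API.

```swift
import Foundation

// Illustrative sketch only: these protocol and method names are hypothetical,
// not mlx-audio-swift's real API. They model the three capabilities the
// description mentions: TTS, ASR, and speaker diarization.
protocol SpeechSynthesizer {
    /// Convert text into raw PCM audio samples.
    func synthesize(_ text: String) throws -> [Float]
}

protocol SpeechTranscriber {
    /// Convert raw PCM audio samples into transcribed text.
    func transcribe(_ samples: [Float]) throws -> String
}

protocol SpeakerDiarizer {
    /// Label each audio segment with a speaker identifier and a time range.
    func diarize(_ samples: [Float]) throws
        -> [(speaker: Int, start: TimeInterval, end: TimeInterval)]
}
```

Modeling each capability as a separate protocol mirrors the "modular SDK" framing: an app can adopt only the pieces it needs.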

macOS-app-development iOS-app-development speech-recognition text-to-speech speaker-diarization

About speech-swift

soniqo/speech-swift

AI speech toolkit for Apple Silicon — ASR, TTS, speech-to-speech, VAD, and diarization powered by MLX and CoreML

This project offers a collection of AI speech models that run directly on your Mac or iOS device, with no internet access required. It turns spoken words into text, generates natural-sounding speech from text, and analyzes audio to determine who spoke when or to remove background noise. It is ideal for developers building privacy-focused audio features for Apple platforms.
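As a rough sketch of how such an on-device pipeline composes, the example below shows voice-activity detection (VAD) gating which audio chunks reach a transcriber. All names here are hypothetical, not speech-swift's actual API; the VAD is a naive energy threshold standing in for the toolkit's real models.

```swift
import Foundation

// Hypothetical composition sketch; speech-swift's real types will differ.
// A simple energy-based VAD decides which chunks are worth transcribing,
// so the (stubbed) ASR step only runs on speech, entirely on-device.
struct VoiceActivityDetector {
    let threshold: Float
    /// Returns true when the chunk's mean absolute amplitude exceeds the threshold.
    func containsSpeech(_ chunk: [Float]) -> Bool {
        guard !chunk.isEmpty else { return false }
        let energy = chunk.reduce(0) { $0 + abs($1) } / Float(chunk.count)
        return energy > threshold
    }
}

/// Run only speech-bearing chunks through the supplied transcription closure.
func transcribeSpeechOnly(chunks: [[Float]],
                          vad: VoiceActivityDetector,
                          transcribe: ([Float]) -> String) -> [String] {
    chunks.filter(vad.containsSpeech).map(transcribe)
}
```

Passing the transcriber as a closure keeps the sketch model-agnostic: any of the toolkit's ASR backends could slot in without changing the gating logic.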

iOS-development macOS-development speech-recognition text-to-speech audio-analysis

Scores updated daily from GitHub, PyPI, and npm data.