FluidAudio and mlx-audio-swift

| Metric | FluidAudio | mlx-audio-swift |
|---|---|---|
| Score | 71 (Verified) | 54 (Established) |
| Maintenance | 25/25 | 13/25 |
| Adoption | 10/25 | 10/25 |
| Maturity | 15/25 | 13/25 |
| Community | 21/25 | 18/25 |
| Stars | 1,689 | 446 |
| Forks | 214 | 56 |
| Downloads | — | — |
| Commits (30d) | 98 | 0 |
| Language | Swift | Swift |
| License | Apache-2.0 | MIT |
| Package / Dependents | No package, no dependents | No package, no dependents |

About FluidAudio

FluidInference/FluidAudio

Frontier CoreML audio models in your apps — text-to-speech, speech-to-text, voice activity detection, and speaker diarization. In Swift, powered by SOTA open source.

This project helps Apple app developers integrate advanced audio AI features directly into their macOS and iOS applications. It takes raw audio as input and can transcribe it to text, detect voice activity, or identify different speakers, and it can also convert text into spoken audio, all running efficiently on the device itself. App developers can use it to add robust voice capabilities to their products.

iOS-development macOS-development speech-recognition voice-user-interface audio-processing

About mlx-audio-swift

Blaizzy/mlx-audio-swift

A modular Swift SDK for audio processing with MLX on Apple Silicon

This is a tool for Apple developers building macOS and iOS apps that process audio. It integrates advanced audio features, such as converting text to speech, transcribing speech to text, and identifying who is speaking, directly into your applications. You provide text or audio as input, and it outputs synthesized speech, transcribed text, or speaker identification data. It is aimed at Swift developers building audio-centric experiences.

macOS-app-development iOS-app-development speech-recognition text-to-speech speaker-diarization
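Both projects are distributed as Swift packages hosted on GitHub, so either can be pulled in through Swift Package Manager. A minimal Package.swift sketch follows; the version requirements and product names are assumptions, not verified release tags, so check each repository's manifest before use.

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "AudioDemo",
    // Minimum platforms are a guess; both SDKs target recent macOS/iOS.
    platforms: [.macOS(.v14), .iOS(.v17)],
    dependencies: [
        // Version/branch requirements below are placeholders, not verified tags.
        .package(url: "https://github.com/FluidInference/FluidAudio.git", from: "0.1.0"),
        .package(url: "https://github.com/Blaizzy/mlx-audio-swift.git", branch: "main"),
    ],
    targets: [
        .executableTarget(
            name: "AudioDemo",
            dependencies: [
                // Product names are assumptions; confirm them in each package's Package.swift.
                .product(name: "FluidAudio", package: "FluidAudio"),
                .product(name: "MLXAudio", package: "mlx-audio-swift"),
            ]
        )
    ]
)
```

Note that mlx-audio-swift requires Apple Silicon, since MLX runs on the unified-memory GPU; FluidAudio relies on Core ML, which is available on both Intel and Apple Silicon Macs.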

Scores are updated daily from GitHub, PyPI, and npm data.