kokoro-onnx and Kokoro-FastAPI
The two are complementary: kokoro-onnx supplies the core inference engine built on ONNX Runtime, while Kokoro-FastAPI wraps the same model in a production-ready server interface that can be deployed across different hardware backends.
About kokoro-onnx
thewh1teagle/kokoro-onnx
TTS with kokoro and onnx runtime
This tool helps you convert written text into natural-sounding speech. You provide text and select from various voices and languages, and it produces an audio file of that text being spoken. It's ideal for developers who need to integrate high-quality text-to-speech capabilities into their applications.
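The flow described above can be sketched with a small helper, assuming kokoro-onnx's published Python API. The model and voices file names ("kokoro-v1.0.onnx", "voices-v1.0.bin") and the voice ID "af_sarah" are assumptions that vary by release; check the project's README for the files your version expects.

```python
# Hypothetical wrapper around kokoro-onnx: text in, WAV file out.
# File names and voice ID below are assumptions; see the project's README.
def synthesize(text: str, voice: str = "af_sarah", out_path: str = "out.wav") -> str:
    from kokoro_onnx import Kokoro   # pip install kokoro-onnx
    import soundfile as sf           # pip install soundfile

    # Model and voices files are downloaded separately (see the repo's releases)
    kokoro = Kokoro("kokoro-v1.0.onnx", "voices-v1.0.bin")
    samples, sample_rate = kokoro.create(text, voice=voice, speed=1.0, lang="en-us")
    sf.write(out_path, samples, sample_rate)  # write the spoken audio to disk
    return out_path
```

Swapping the `voice` and `lang` arguments is how you select among the bundled voices and languages.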
About Kokoro-FastAPI
remsky/Kokoro-FastAPI
Dockerized FastAPI wrapper for the Kokoro-82M text-to-speech model, with CPU ONNX and NVIDIA GPU PyTorch support and automatic stitching of audio segments
This project helps content creators, educators, and developers quickly turn written text into natural-sounding speech across multiple languages like English, Japanese, and Chinese. You provide text and select voices, and it outputs high-quality audio files, even allowing for custom voice mixes. It's designed for individuals or teams needing on-demand, customizable text-to-speech capabilities.
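A client for this server might look like the sketch below. It assumes the server's OpenAI-style `/v1/audio/speech` endpoint and its default port 8880, and that custom voice mixes are requested by joining voice IDs with "+"; all of these details should be verified against the Kokoro-FastAPI README for your version.

```python
import json
from urllib import request

# Assumed default address for a locally running Kokoro-FastAPI container.
BASE_URL = "http://localhost:8880/v1"

def build_speech_request(text: str, voices: list[str], fmt: str = "mp3") -> dict:
    """Build the JSON payload for the (assumed) OpenAI-compatible speech endpoint."""
    return {
        "model": "kokoro",
        "input": text,
        "voice": "+".join(voices),  # e.g. "af_sky+af_bella" requests a voice mix
        "response_format": fmt,
    }

def speak(text: str, voices=("af_sky", "af_bella"), out_path: str = "speech.mp3") -> str:
    payload = build_speech_request(text, list(voices))
    req = request.Request(
        f"{BASE_URL}/audio/speech",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    # Stream the returned audio bytes straight to a file
    with request.urlopen(req) as resp, open(out_path, "wb") as f:
        f.write(resp.read())
    return out_path
```

Because the endpoint shape mirrors OpenAI's audio API, existing OpenAI client libraries can often be pointed at the server by overriding their base URL.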