tincans-ai/gazelle
Joint speech-language model - respond directly to audio!
This project offers a speech-language model that can directly understand and respond to spoken audio. You provide an audio input, and the model processes both the speech and language to generate a text response. This tool is designed for developers or researchers building applications that require a language model to react immediately to voice commands or spoken content, without needing a separate transcription step.
373 stars. No commits in the last 6 months.
Use this if you are a developer experimenting with advanced AI models that can directly process and respond to spoken audio inputs.
Not ideal if you need a robust, production-ready solution for real-world applications, as these early releases are neither performance-optimized nor hardened against adversarial attacks.
Stars
373
Forks
33
Language
Python
License
Apache-2.0
Category
Last pushed
Jul 01, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/tincans-ai/gazelle"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
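The curl command above can also be wrapped in a few lines of Python. A minimal sketch is below; the URL pattern comes from the endpoint shown, but the JSON field names (`stars`, `forks`, `language`) are assumptions about the response shape, not a published schema.

```python
"""Sketch: build a pt-edge quality API URL and parse the response.
Field names in parse_quality are assumed, not a documented schema."""
import json

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def parse_quality(payload: str) -> dict:
    """Extract a few likely fields from the JSON payload;
    missing keys fall back to None."""
    data = json.loads(payload)
    return {
        "stars": data.get("stars"),
        "forks": data.get("forks"),
        "language": data.get("language"),
    }
```

To fetch live data, pass `quality_url("tincans-ai", "gazelle")` to any HTTP client (e.g. `urllib.request.urlopen`); without a key you get 100 requests/day.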
Higher-rated alternatives
jingyaogong/minimind-v
🚀 Train a 26M-parameter visual multimodal VLM from scratch in just 1 hour!
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model