RhythmusByte/Sign-Language-to-Speech

Real-time ASL interpreter using OpenCV and TensorFlow/Keras for hand gesture recognition. Features custom hand tracking, image preprocessing, and gesture classification to translate American Sign Language into text and speech output. Built with accessibility in mind.

Score: 27 / 100 (Experimental)

This tool helps bridge communication gaps by translating American Sign Language (ASL) hand gestures into spoken words and text in real-time. It takes live video of a person signing and converts their hand movements into understandable output. Anyone interacting with deaf or hard-of-hearing individuals, such as educators, customer service professionals, or family members, could use this to facilitate smoother conversations.

Use this if you need a real-time way to understand ASL gestures and have them spoken aloud or displayed as text.

Not ideal if you need to translate complex conversations with nuanced facial expressions or body language beyond basic hand gestures.
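The description above mentions gesture classification into text as the last stage of the pipeline. A minimal stdlib-only sketch of that step, mapping a classifier's probability vector to an ASL letter, might look like the following. The label set, function name, and vector layout are assumptions for illustration, not the repo's actual API:

```python
# Hypothetical final step of the pipeline: pick the most likely ASL letter
# from a classifier's output probabilities. The A-Z label list is an
# assumption; the repo's model may use a different class set.
ASL_LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # "A".."Z"

def decode_prediction(probs: list[float]) -> str:
    """Return the label with the highest probability."""
    if len(probs) != len(ASL_LABELS):
        raise ValueError("probability vector length must match label count")
    best = max(range(len(probs)), key=lambda i: probs[i])
    return ASL_LABELS[best]

# Example: a 26-way vector peaking at index 7 decodes to "H".
probs = [0.01] * 26
probs[7] = 0.75
print(decode_prediction(probs))  # prints "H"
```

In the real project this would sit behind the OpenCV hand-tracking and TensorFlow/Keras inference stages, with the decoded letter fed to a text-to-speech engine.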

Tags: ASL translation, communication assistance, accessibility tools, inclusive education, customer support

No Package · No Dependents
Maintenance 6 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 0 / 25


Stars: 9
Forks:
Language: Python
License: BSD-3-Clause
Last pushed: Dec 25, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/RhythmusByte/Sign-Language-to-Speech"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
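The same endpoint shown in the curl command can be called from Python with the standard library. This is a sketch under stated assumptions: the URL path layout is taken from the curl example, but the JSON field names in the response are not documented here, so the request is only constructed, not parsed against a fixed schema:

```python
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Build the quality-report URL; path layout taken from the curl example above.
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("voice-ai", "RhythmusByte", "Sign-Language-to-Speech")
print(url)

# To actually fetch the report (subject to the 100 requests/day limit):
#   with urllib.request.urlopen(url) as resp:
#       report = resp.read()  # JSON; field names are not documented here
```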