beingaryan/Sign-To-Speech-Conversion

A sign language detection system based on computer vision and deep learning, built with OpenCV and TensorFlow/Keras.

Score: 50 / 100 (Established)

This project helps people communicate using American Sign Language (ASL) by converting their hand gestures into spoken words in real-time. It takes live video of ASL signs as input and outputs audible speech. This tool is designed for individuals who use ASL and want to communicate with hearing people, as well as for those who interact with ASL users.

137 stars. No commits in the last 6 months.

Use this if you need a way to translate American Sign Language gestures from a live video feed into spoken English words, facilitating communication with hearing individuals.

Not ideal if you need to translate sign languages other than American Sign Language, or if you require advanced sentence construction and nuance beyond converting individual alphabet signs into speech.
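Because the project recognizes individual alphabet signs rather than full sentences, the step after classification is essentially a letter-buffering one: each recognized sign maps to a letter, and buffered letters form the word that gets spoken. A minimal sketch of that mapping step, where the label list and function names are illustrative assumptions rather than the repository's actual code:

```python
import string

# Hypothetical label set: one class per ASL alphabet letter (A=0 ... Z=25).
LABELS = list(string.ascii_uppercase)

def indices_to_word(indices):
    """Join per-frame class predictions into a word ready for a TTS engine."""
    return "".join(LABELS[i] for i in indices)

# e.g. model predictions for the signs H and I
print(indices_to_word([7, 8]))  # → "HI"
```

The resulting string would then be handed to a text-to-speech backend to produce the audible output.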

Tags: ASL · communication accessibility · speech generation · video interpretation · inclusive communication

Status: Stale (6 months) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 24 / 25

How are scores calculated?

Stars: 137
Forks: 101
Language: Jupyter Notebook
License: MIT
Last pushed: Apr 07, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/beingaryan/Sign-To-Speech-Conversion"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
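The same endpoint can be queried from Python; a minimal sketch using only the standard library (the shape of the JSON response is not documented here, so it is simply parsed and printed):

```python
import json
import urllib.request

# Endpoint from the curl example above; no API key is needed
# for up to 100 requests per day.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/beingaryan/Sign-To-Speech-Conversion")

def fetch_quality_report(url: str = URL) -> dict:
    """Fetch the quality report and return it as parsed JSON."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(json.dumps(fetch_quality_report(), indent=2))
```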