AvishakeAdhikary/Realtime-Sign-Language-Detection-Using-LSTM-Model

Realtime Sign Language Detection: Deep learning model for accurate, real-time recognition of sign language gestures using Python and TensorFlow.

Score: 51 / 100 (Established)

This project helps bridge communication gaps by instantly interpreting sign language gestures. You perform gestures in front of a camera, and the system translates them in real-time. It's designed for individuals with hearing impairments and those who communicate with them, such as educators or support staff, to facilitate more natural interaction.

Use this if you need a real-time system to detect and interpret sign language gestures from a live video feed, especially for assistive communication.

Not ideal if you need to translate complex spoken language into sign language, as this focuses on interpreting gestures from the user.
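To make the gesture-recognition pipeline concrete, here is a minimal pure-NumPy sketch of the core idea: an LSTM cell stepped over a sequence of flattened keypoint frames, followed by a softmax over gesture classes. The real project builds this with TensorFlow/Keras; the sequence length, feature count, and class names below are illustrative assumptions, not values taken from the repository.

```python
import numpy as np

rng = np.random.default_rng(0)

SEQ_LEN = 30      # frames per gesture clip (assumption)
N_FEATURES = 63   # e.g. 21 hand landmarks x (x, y, z) coordinates (assumption)
HIDDEN = 16       # LSTM hidden state size (assumption)
N_CLASSES = 3     # e.g. "hello", "thanks", "iloveyou" (assumption)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Randomly initialised weights for the four LSTM gates:
# input (i), forget (f), candidate cell (c), output (o).
W = {g: rng.standard_normal((HIDDEN, N_FEATURES)) * 0.1 for g in "ifco"}
U = {g: rng.standard_normal((HIDDEN, HIDDEN)) * 0.1 for g in "ifco"}
b = {g: np.zeros(HIDDEN) for g in "ifco"}
W_out = rng.standard_normal((N_CLASSES, HIDDEN)) * 0.1

def lstm_classify(frames):
    """Run the LSTM over (SEQ_LEN, N_FEATURES) frames; return class probs."""
    h = np.zeros(HIDDEN)
    c = np.zeros(HIDDEN)
    for x in frames:
        i = sigmoid(W["i"] @ x + U["i"] @ h + b["i"])   # input gate
        f = sigmoid(W["f"] @ x + U["f"] @ h + b["f"])   # forget gate
        o = sigmoid(W["o"] @ x + U["o"] @ h + b["o"])   # output gate
        g = np.tanh(W["c"] @ x + U["c"] @ h + b["c"])   # candidate cell state
        c = f * c + i * g        # update cell state
        h = o * np.tanh(c)       # update hidden state
    logits = W_out @ h           # classify from the final hidden state
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

clip = rng.standard_normal((SEQ_LEN, N_FEATURES))  # stand-in keypoint sequence
probs = lstm_classify(clip)
print(probs.shape, float(probs.sum()))
```

In practice the keypoint frames would come from a live webcam feed (e.g. via a pose/hand landmark extractor) rather than random data, and the weights would be learned from labelled gesture clips.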

assistive-technology communication-accessibility sign-language-interpretation deaf-community-support real-time-translation
No Package · No Dependents

Maintenance: 6 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 20 / 25


Stars: 78
Forks: 24
Language: Jupyter Notebook
License: MIT
Last pushed: Nov 18, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/AvishakeAdhikary/Realtime-Sign-Language-Detection-Using-LSTM-Model"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
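For scripted access, the curl command above can be reproduced with only the Python standard library. This is a minimal sketch: the URL is copied from the curl example, but the response's JSON field names are not documented here, so no parsing of specific fields is assumed.

```python
import json
import urllib.request

# Endpoint copied verbatim from the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/"
       "AvishakeAdhikary/Realtime-Sign-Language-Detection-Using-LSTM-Model")

# Build the request; no API key is needed within the free daily quota.
req = urllib.request.Request(URL, headers={"Accept": "application/json"})

# Uncomment to perform the request (requires network access):
# with urllib.request.urlopen(req, timeout=10) as resp:
#     data = json.loads(resp.read())
#     print(json.dumps(data, indent=2))

print(req.full_url)
```

With a free key (for the higher 1,000/day limit), the key would typically be passed in a header or query parameter; consult the API's own documentation for the exact mechanism, as it is not specified here.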