Realtime-Sign-Language-Detection-Using-LSTM-Model and Sign-To-Speech-Conversion

                  Realtime-Sign-Language-      Sign-To-Speech-
                  Detection-Using-LSTM-Model   Conversion
Maintenance       6/25                         0/25
Adoption          9/25                         10/25
Maturity          16/25                        16/25
Community         20/25                        24/25
Stars             78                           137
Forks             24                           101
Downloads         —                            —
Commits (30d)     0                            0
Language          Jupyter Notebook             Jupyter Notebook
License           MIT                          MIT
Package           none published               none published
Dependents        none                         none
Status            —                            Stale (6 months)

About Realtime-Sign-Language-Detection-Using-LSTM-Model

AvishakeAdhikary/Realtime-Sign-Language-Detection-Using-LSTM-Model

Realtime Sign Language Detection: Deep learning model for accurate, real-time recognition of sign language gestures using Python and TensorFlow.

This project helps bridge communication gaps by instantly interpreting sign language gestures. You perform gestures in front of a camera, and the system translates them in real-time. It's designed for individuals with hearing impairments and those who communicate with them, such as educators or support staff, to facilitate more natural interaction.
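Projects of this kind typically classify short sequences of per-frame hand keypoints with a stacked LSTM. As a minimal sketch in Keras (the repo names TensorFlow), assuming hypothetical shapes of 30-frame windows, 63 features per frame (21 landmarks × 3 coordinates), and 3 gesture classes — the repo's actual architecture and dimensions may differ:

```python
# Hedged sketch of a sequence classifier for hand-keypoint windows.
# All shapes (30 frames, 63 features, 3 classes) are illustrative
# assumptions, not taken from the repository itself.
import tensorflow as tf

def build_model(frames=30, features=63, num_classes=3):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(frames, features)),
        # Stacked LSTMs: the first returns the full sequence so the
        # second can consume it; the second returns only the final state.
        tf.keras.layers.LSTM(64, return_sequences=True),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(32, activation="relu"),
        # Softmax over gesture classes.
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

At inference time, each new camera frame's keypoints would be appended to a sliding window and the window fed to `model.predict`, giving a per-frame class distribution.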

Tags: assistive-technology, communication-accessibility, sign-language-interpretation, deaf-community-support, real-time-translation

About Sign-To-Speech-Conversion

beingaryan/Sign-To-Speech-Conversion

A sign language detection system based on computer vision and deep learning, built with the OpenCV and TensorFlow/Keras frameworks.

This project helps people communicate using American Sign Language (ASL) by converting their hand gestures into spoken words in real-time. It takes live video of ASL signs as input and outputs audible speech. This tool is designed for individuals who use ASL and want to communicate with hearing people, as well as for those who interact with ASL users.
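Before a recognized sign is spoken aloud, per-frame predictions are usually stabilised, since a classifier run on live video flickers between labels. A common approach is a majority vote over a sliding window of recent frames, speaking a label only when the window agrees; a minimal sketch (the repo's actual smoothing and TTS backend may differ):

```python
from collections import Counter, deque

def smooth_predictions(frame_labels, window=5):
    """Majority-vote smoothing over per-frame gesture labels.

    Emits a label only when `window` consecutive frames agree and the
    label differs from the last one emitted, so each held sign is
    spoken once rather than once per frame.
    """
    buf = deque(maxlen=window)
    emitted = []
    last = None
    for label in frame_labels:
        buf.append(label)
        if len(buf) == window:
            top, count = Counter(buf).most_common(1)[0]
            if count == window and top != last:
                emitted.append(top)
                last = top
    return emitted
```

Each emitted label could then be handed to a text-to-speech engine (for example `pyttsx3.speak(label)` — an assumption; the source does not name its TTS library) to produce the audible output.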

Tags: ASL, communication accessibility, speech generation, video interpretation, inclusive communication

Scores updated daily from GitHub, PyPI, and npm data.