Realtime-Sign-Language-Detection-Using-LSTM-Model vs. Real-Time-Sign-Language-Recognition

Both projects are direct competitors offering real-time sign language recognition with LSTM-based deep learning models. The key technical difference is the framework: the first project is built on TensorFlow, while the second uses PyTorch.

Realtime-Sign-Language-Detection-Using-LSTM-Model
  Scores: Maintenance 6/25 · Adoption 9/25 · Maturity 16/25 · Community 20/25
  Stars: 78 · Forks: 24 · Commits (30d): 0
  Language: Jupyter Notebook · License: MIT
  No package published · No dependents

Real-Time-Sign-Language-Recognition
  Scores: Maintenance 6/25 · Adoption 4/25 · Maturity 16/25 · Community 12/25
  Stars: 5 · Forks: 1 · Commits (30d): 0
  Language: Python · License: MIT
  No package published · No dependents

About Realtime-Sign-Language-Detection-Using-LSTM-Model

AvishakeAdhikary/Realtime-Sign-Language-Detection-Using-LSTM-Model

Realtime Sign Language Detection: Deep learning model for accurate, real-time recognition of sign language gestures using Python and TensorFlow.

This project helps bridge communication gaps by instantly interpreting sign language gestures. You perform gestures in front of a camera, and the system translates them in real-time. It's designed for individuals with hearing impairments and those who communicate with them, such as educators or support staff, to facilitate more natural interaction.
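The description above doesn't include code, but the named stack (Python, TensorFlow, an LSTM over gesture sequences) suggests the general shape of such a model. The following is a minimal sketch, not the project's actual implementation; the sequence length (30 frames), feature size (1662 flattened keypoint values per frame, a common choice when using MediaPipe Holistic landmarks), and the 3-class output are all assumptions for illustration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 30          # assumed: frames captured per gesture clip
NUM_FEATURES = 1662   # assumed: flattened pose/hand keypoints per frame
NUM_CLASSES = 3       # assumed: number of gesture classes

# LSTM classifier over per-frame keypoint vectors
model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, NUM_FEATURES)),
    layers.LSTM(64),                                   # summarize the frame sequence
    layers.Dense(32, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),   # per-class probabilities
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# One dummy clip stands in for a real camera feed
clip = np.random.rand(1, SEQ_LEN, NUM_FEATURES).astype("float32")
probs = model.predict(clip, verbose=0)
print(probs.shape)  # (1, 3): one probability per gesture class
```

In a real-time setting, the camera loop would append each frame's keypoints to a sliding 30-frame window and call `predict` on that window, emitting the top-scoring class as the translation.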

assistive-technology communication-accessibility sign-language-interpretation deaf-community-support real-time-translation

About Real-Time-Sign-Language-Recognition

Uni-Creator/Real-Time-Sign-Language-Recognition

This project implements an LSTM-based model for recognizing sign language gestures, specifically targeting actions like 'hello', 'thanks', 'nothing', and 'I love you'. Using PyTorch, it processes sequences of hand gestures, trains the model, and evaluates performance with confusion matrices and class probabilities.
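To make the description concrete, here is a minimal sketch of a PyTorch LSTM classifier of the kind described. The 4 output classes come from the gesture list above; the per-frame feature size (126, i.e. 2 hands × 21 landmarks × 3 coordinates) and sequence length are assumptions, not values taken from the repository.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 4   # 'hello', 'thanks', 'nothing', 'I love you'
FEATURES = 126    # assumed: 2 hands x 21 landmarks x 3 coordinates per frame
SEQ_LEN = 30      # assumed: frames per gesture sequence

class SignLSTM(nn.Module):
    """Minimal LSTM classifier over per-frame hand-keypoint vectors."""
    def __init__(self, features=FEATURES, hidden=64, classes=NUM_CLASSES):
        super().__init__()
        self.lstm = nn.LSTM(features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)

    def forward(self, x):                # x: (batch, seq, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])     # classify from the last time step

model = SignLSTM()
batch = torch.randn(2, SEQ_LEN, FEATURES)   # two dummy gesture sequences
logits = model(batch)
probs = torch.softmax(logits, dim=-1)       # class probabilities for evaluation
print(logits.shape)  # torch.Size([2, 4])
```

Training such a model with cross-entropy loss and tallying predicted-vs-true labels per class is what produces the confusion matrices mentioned in the description.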

Scores updated daily from GitHub, PyPI, and npm data.