Sign-Language-Interpreter-using-Deep-Learning and Sign-To-Speech-Conversion

Sign-Language-Interpreter-using-Deep-Learning: Maintenance 2/25 · Adoption 10/25 · Maturity 16/25 · Community 25/25
Sign-To-Speech-Conversion: Maintenance 0/25 · Adoption 10/25 · Maturity 16/25 · Community 24/25
Sign-Language-Interpreter-using-Deep-Learning: Stars 740 · Forks 251 · Commits (30d) 0 · Language Python · License MIT · Stale 6m · No Package · No Dependents
Sign-To-Speech-Conversion: Stars 137 · Forks 101 · Commits (30d) 0 · Language Jupyter Notebook · License MIT · Stale 6m · No Package · No Dependents

About Sign-Language-Interpreter-using-Deep-Learning

harshbg/Sign-Language-Interpreter-using-Deep-Learning

A sign language interpreter using live video feed from the camera.

This project helps deaf individuals communicate more easily by translating American Sign Language (ASL) gestures into text in real time. It takes a live video feed from a camera, identifies the hand signs, and outputs the corresponding letters or words. The tool is designed for deaf people who want a personal, always-available translator for daily communication, without needing a human interpreter.
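The core of the pipeline described above is classifying each video frame's hand sign and emitting a letter. A minimal sketch of that final classification step in plain Python; `CLASS_LABELS`, the score vector, and the confidence threshold are hypothetical stand-ins for the project's trained gesture-recognition model, not its actual API:

```python
# Map one frame's classifier scores to an ASL letter.
# CLASS_LABELS and the scores below are hypothetical stand-ins for the
# output layer of a trained gesture-recognition model.
CLASS_LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

def predict_letter(scores, threshold=0.6):
    """Return the top-scoring letter, or None when the model is unsure."""
    if len(scores) != len(CLASS_LABELS):
        raise ValueError("expected one score per class")
    best = max(range(len(scores)), key=scores.__getitem__)
    return CLASS_LABELS[best] if scores[best] >= threshold else None

# Example: a confident prediction for the third class ("C").
scores = [0.01] * 26
scores[2] = 0.9
```

In the real project this would be fed by per-frame model output rather than a hand-built list; thresholding keeps ambiguous frames from producing spurious letters.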

assistive-technology accessibility deaf-community sign-language daily-communication

About Sign-To-Speech-Conversion

beingaryan/Sign-To-Speech-Conversion

A sign language detection system based on computer vision and deep learning, using the OpenCV and TensorFlow/Keras frameworks.

This project helps people communicate using American Sign Language (ASL) by converting their hand gestures into spoken words in real time. It takes live video of ASL signs as input and outputs audible speech. The tool is designed for individuals who use ASL and want to communicate with hearing people, as well as for those who interact with ASL users.
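The video-to-speech flow described above hinges on collapsing noisy per-frame predictions into stable text before synthesis. A minimal sketch of that smoothing step, assuming the classifier emits one letter (or `None`) per frame; the function name and `min_run` parameter are illustrative, not the project's actual code:

```python
def frames_to_text(frame_letters, min_run=3):
    """Collapse per-frame letter predictions into text.

    A letter is accepted only after it is predicted for `min_run`
    consecutive frames, which filters out single-frame flicker.
    """
    text = []
    current, run = None, 0
    for letter in frame_letters:
        if letter == current:
            run += 1
        else:
            current, run = letter, 1
        if run == min_run and letter is not None:
            text.append(letter)
    return "".join(text)

# "H" and "I" are each held for several frames; the stray "X" is dropped.
stream = ["H", "H", "H", "X", "I", "I", "I", "I"]
# The resulting string would then be handed to a text-to-speech engine
# (e.g. pyttsx3's say()/runAndWait()) to produce the audible output.
```

Requiring a short run of identical predictions trades a little latency for much cleaner text, which matters when the output is spoken aloud rather than displayed.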

asl-communication accessibility speech-generation video-interpretation inclusive-communication

Scores updated daily from GitHub, PyPI, and npm data.