SARIT42/lipsyncr
LipSyncr is a lip-reading web app based on the LipNet model. It transcribes spoken words from video by analyzing lip movements: you upload footage and it outputs the transcribed text, which is useful for deciphering conversations where the audio is unclear or unavailable. Anyone who needs to extract speech from silent or hard-to-hear video, such as forensic analysts or journalists, may find it beneficial.
No commits in the last 6 months.
Use this if you need to transcribe spoken content from video footage where the audio is poor or missing, or when you want an additional way to verify what was said.
Not ideal if you primarily need to process live video feeds, as its current scope focuses on pre-recorded video files.
Stars: 79
Forks: 34
Language: Jupyter Notebook
License: MIT
Category: voice-ai
Last pushed: May 29, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/SARIT42/lipsyncr"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
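If you prefer calling the endpoint from code rather than curl, a minimal sketch in Python's standard library follows. The URL shape is taken from the curl example above; the `quality_url` helper name is our own, and how an API key would be passed (if you have one) is an assumption, not documented here.

```python
import urllib.request


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a given repo, mirroring the curl example."""
    base = "https://pt-edge.onrender.com/api/v1/quality"
    return f"{base}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> bytes:
    """Fetch the raw API response. Counts against the 100 requests/day free tier."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return resp.read()


# Example: the URL for this repository.
print(quality_url("voice-ai", "SARIT42", "lipsyncr"))
```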
Higher-rated alternatives
primepake/wav2lip_288x288
Wav2Lip version 288 and pipeline to train
Chris10M/Lip2Speech
A pipeline to read lips and generate speech for the read content, i.e., lip-to-speech synthesis.
Markfryazino/wav2lip-hq
Extension of Wav2Lip repository for processing high-quality videos.
d-kavinraja/MouthMap
MouthMap is a deep learning-based lip reading system that converts silent video sequences into...
adhadse/Deepdubpy
A complete end-to-end Deep Learning system to generate high quality human like speech in English...