LipSyncr is a lip reading web application designed to accurately decipher spoken words from video footage. Built upon the powerful LipNet model, this application employs advanced deep learning techniques to analyze and interpret lip movements with remarkable precision.
The dataset used for training the model is a subset of the GRID Corpus dataset.
gdown was used to download a subset (1 speaker) of the full dataset (34 speakers) from Google Drive.
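A minimal sketch of that download step, assuming a placeholder Google Drive file ID (the actual ID and archive name used by the project are not shown here):

```python
import gdown
import zipfile

GDRIVE_FILE_ID = "YOUR_FILE_ID"  # hypothetical placeholder, not the project's real ID
url = f"https://drive.google.com/uc?id={GDRIVE_FILE_ID}"
output = "data.zip"

# Download the single-speaker subset archive from Google Drive
gdown.download(url, output, quiet=False)

# Extract videos and alignments into a local data/ directory
with zipfile.ZipFile(output, "r") as archive:
    archive.extractall("data")
```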
To download the complete dataset, run the following command in your terminal:
bash GridCorpus-Downloader.sh FirstSpeaker SecondSpeaker
where FirstSpeaker and SecondSpeaker are integers specifying which speakers to download.
NOTE: Speaker 21 is missing from the GRID Corpus dataset due to technical issues.
- Python (TensorFlow/Keras) -> data preparation, pipeline, model training & testing.
- Streamlit -> web application.
- LipNet -> lip reading model architecture idea.
- FFmpeg -> video file format conversion.
- OpenCV -> video capture and frame processing (a rough sketch of this step follows the list).
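The sketch below illustrates how the FFmpeg and OpenCV pieces could fit together; the file paths and the fixed mouth crop are illustrative assumptions, not the app's exact preprocessing:

```python
import subprocess
import cv2
import numpy as np

def convert_to_mp4(src: str, dst: str) -> None:
    # ffmpeg re-encodes the GRID .mpg clip into .mp4 so it can be played in the browser
    subprocess.run(["ffmpeg", "-y", "-i", src, dst], check=True)

def load_frames(path: str) -> np.ndarray:
    # Read every frame, convert to grayscale, and crop a fixed mouth region
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frames.append(gray[190:236, 80:220])  # illustrative fixed crop, not the exact one used
    cap.release()
    return np.stack(frames)

convert_to_mp4("data/s1/bbaf2n.mpg", "test_video.mp4")  # hypothetical sample clip
frames = load_frames("test_video.mp4")
print(frames.shape)  # (num_frames, 46, 140)
```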
Implement dlib-based video processing so the app can handle all types of videos, including live video input (a possible approach is sketched below).
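One way this dlib step could work, shown as a hedged sketch rather than a final design; the predictor model path, the mouth landmark indices (48-67 of the 68-point model), and the padding are assumptions:

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed local model file

def crop_mouth(frame_bgr, pad=10):
    """Return a grayscale crop around the mouth, or None if no face is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None
    landmarks = predictor(gray, faces[0])
    # Points 48-67 of dlib's 68-point model outline the mouth
    mouth = np.array(
        [(landmarks.part(i).x, landmarks.part(i).y) for i in range(48, 68)],
        dtype=np.int32,
    )
    x, y, w, h = cv2.boundingRect(mouth)
    return gray[max(y - pad, 0):y + h + pad, max(x - pad, 0):x + w + pad]
```

Landmark-based cropping would remove the dependence on a fixed crop window, which is what limits the current pipeline to pre-framed GRID-style videos.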
- LipNet: End-to-End Sentence-level Lipreading - Yannis M. Assael, Brendan Shillingford, Shimon Whiteson, Nando de Freitas
- Sequence Modelling with Connectionist Temporal Classification (CTC), an algorithm used to train deep neural networks for speech recognition, handwriting recognition, and other sequence problems.
- LipNet: End-to-End Sentence-level Lipreading - GitHub Code implementation
- Keras Automatic Speech Recognition With CTC