Break the language barrier through auto-generated closed captions, derived from hand sign detection using machine learning.
Translate ASL in real time by providing captions as the user is signing various letters.
- Decide on a model type (e.g. R-CNN, Fast R-CNN, Faster R-CNN, YOLO, or others)
- Decide on a dataset for model training
- Test the accuracy of the model
- Translate live as the camera is pointed at a person performing sign language (see the captioning sketch after this list)
- App can read the captions aloud to the user
- Input a video and output text
- App supports multiple sign languages
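As a rough starting point for the live-captioning goal, below is a minimal sketch of the inference loop: grab webcam frames, classify the sign, and overlay the predicted letter as a caption. The model file name (`asl_classifier.h5`), 64x64 input size, and 26-letter label set are assumptions, placeholders for whatever model and dataset the team settles on.

```python
# Minimal live-captioning loop: webcam frame -> classifier -> on-screen caption.
import cv2
import numpy as np
import tensorflow as tf

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # assumed 26-letter output
model = tf.keras.models.load_model("asl_classifier.h5")   # hypothetical trained model

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and scale the frame; adjust to match the preprocessing used at training time.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    img = cv2.resize(rgb, (64, 64)).astype(np.float32) / 255.0
    probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
    letter = LABELS[int(np.argmax(probs))]
    # Draw the predicted letter onto the frame as a caption.
    cv2.putText(frame, letter, (30, 60), cv2.FONT_HERSHEY_SIMPLEX,
                2.0, (0, 255, 0), 3)
    cv2.imshow("ASL captions", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```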
Flutter can be used for the basic front end of the project. The majority of the time will be spent developing the backend and training the model on the chosen dataset for accuracy.
Install by following the guidelines here
The model can be built and trained with TensorFlow, then converted to TensorFlow Lite for on-device inference
Install by following the guidelines here
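A minimal sketch of that workflow: train a small CNN, check its accuracy on a held-out split, and convert it to TensorFlow Lite. The `asl_dataset/` folder layout (one subfolder per letter), the image size, and the network architecture are all assumptions to be replaced by whatever dataset and model the team chooses.

```python
# Train a small CNN on an image-folder dataset, evaluate it, then export to TFLite.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_dataset", validation_split=0.2, subset="training",
    seed=42, image_size=(64, 64), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_dataset", validation_split=0.2, subset="validation",
    seed=42, image_size=(64, 64), batch_size=32)

num_classes = len(train_ds.class_names)
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)

# Report held-out accuracy, then export the model for the mobile app.
loss, acc = model.evaluate(val_ds)
print(f"validation accuracy: {acc:.3f}")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("asl_model.tflite", "wb") as f:
    f.write(converter.convert())
```

The resulting `asl_model.tflite` file is what would get bundled with the Flutter app.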
To integrate your TensorFlow Lite model with Flutter
Install by following the guidelines here
PyTorch Mobile is a framework for running PyTorch models on mobile devices
Install by following the guidelines here
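If the team goes the PyTorch route, a trained model can be exported for PyTorch Mobile roughly as follows. The MobileNetV2 architecture, input size, and output file name below are assumptions, not requirements; substitute your own trained model.

```python
# Export an existing PyTorch classifier to a mobile-friendly format.
import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

model = torchvision.models.mobilenet_v2(num_classes=26)  # assumed 26 letter classes
model.eval()

# Trace with a dummy input shaped like the frames the app will send.
example = torch.rand(1, 3, 64, 64)
scripted = torch.jit.trace(model, example)
mobile_model = optimize_for_mobile(scripted)

# .ptl files are what the PyTorch Mobile / lite interpreter runtime loads.
mobile_model._save_for_lite_interpreter("asl_model.ptl")
```

The `.ptl` file is then loaded on-device by the PyTorch Mobile runtime (or via the Flutter plugin mentioned below).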
These can be used to integrate an existing PyTorch model with Flutter
However, note that this only supports Android and not iOS
For installation and guidelines, click here
For an older version, click here
Below are some resources to help overcome possible roadblocks during the project
- Training dataset of alphabet sign images
- Large database of handwritten digits used for training various image processing systems (a loading sketch for MNIST-style CSV data follows this list)
- Another ASL dataset
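Several of the MNIST-style sign-language datasets ship as a CSV with a label column plus one column per pixel. A minimal loading sketch under that assumption (the file name and 28x28 grayscale shape are hypothetical; check the dataset you pick):

```python
# Load an MNIST-style CSV dataset into image and label arrays.
import numpy as np
import pandas as pd

df = pd.read_csv("sign_mnist_train.csv")           # hypothetical file name
labels = df["label"].to_numpy()
images = df.drop(columns=["label"]).to_numpy()
images = images.reshape(-1, 28, 28, 1).astype(np.float32) / 255.0  # assumed 28x28 grayscale

print(images.shape, labels.shape)
```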
Look through all of these resources at the beginning of the semester!