✔ This repository contains the code for our research publication, *A Generic Multi-modal Dynamic Gesture Recognition System using Machine Learning*, presented at the IEEE Future of Information and Communication Conference (FICC) 2018, Singapore.
✔ Human-Computer Interaction facilitates intelligent communication between humans and computers, in which gesture recognition plays a prominent role. This paper proposes a machine learning system that identifies dynamic gestures from tri-axial acceleration data drawn from two public datasets, uWave and Sony, acquired using accelerometers embedded in Wii remotes and smartwatches, respectively.
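As a rough illustration of what such a trace looks like in code, the sketch below loads one gesture sample as an (N, 3) array. The file layout (one space-separated x y z reading per line) and the example path are assumptions about the released datasets, not something this repository guarantees:

```python
import numpy as np

def load_gesture(path):
    """Load one gesture sample as an (N, 3) array of x/y/z acceleration.

    Assumes a plain-text file with one space-separated x y z reading
    per line; adjust for the actual layout of the uWave/Sony releases.
    """
    return np.loadtxt(path)

# Hypothetical usage (path is illustrative only):
# trace = load_gesture("uWaveGestureLibrary/U1/gesture1_rep1.txt")
```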
✔ A dynamic gesture signed by the user is characterized by a generic set of features extracted across the time and frequency domains.
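A minimal sketch of such a feature extractor is shown below; the specific statistics (per-axis mean, standard deviation, range, and spectral energy) are illustrative stand-ins, not necessarily the paper's exact feature set:

```python
import numpy as np

def extract_features(trace):
    """Compute a generic feature vector from an (N, 3) acceleration trace.

    Per axis: mean, standard deviation, min, max (time domain) and the
    energy of the FFT magnitude spectrum (frequency domain).
    """
    feats = []
    for axis in range(trace.shape[1]):
        sig = trace[:, axis]
        feats.extend([sig.mean(), sig.std(), sig.min(), sig.max()])
        spectrum = np.abs(np.fft.rfft(sig))
        feats.append(np.sum(spectrum ** 2) / len(sig))  # spectral energy
    return np.array(feats)
```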
✔ The system was analyzed from an end-user perspective and modelled to operate in three modes; the mode of operation determines which subsets of the data are used for training and testing.
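The paper's three modes are not restated in this summary, so the sketch below uses a common formulation for wearable gesture systems (personalized, user-independent, and generic splits) purely as a hypothetical example of mode-dependent data selection:

```python
import numpy as np
from sklearn.model_selection import train_test_split

def split_by_mode(X, y, users, mode, test_user=None, seed=0):
    """Return (X_train, X_test, y_train, y_test) for a given mode.

    Hypothetical modes (the paper's exact definitions may differ):
      "personalized"     - train and test on one user's own data
      "user-independent" - hold out one user entirely for testing
      "generic"          - pool all users and split randomly
    """
    users = np.asarray(users)
    if mode == "personalized":
        mask = users == test_user
        return train_test_split(X[mask], y[mask], test_size=0.3, random_state=seed)
    if mode == "user-independent":
        train, test = users != test_user, users == test_user
        return X[train], X[test], y[train], y[test]
    if mode == "generic":
        return train_test_split(X, y, test_size=0.3, random_state=seed)
    raise ValueError(f"unknown mode: {mode}")
```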
✔ From an initial pool of seven classifiers, three were chosen to evaluate each dataset across all modes, pushing the system towards mode neutrality and dataset independence.
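A sketch of that evaluation loop follows; the three classifiers named here (random forest, k-NN, RBF SVM) are assumptions for illustration, since this summary does not say which were shortlisted:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Candidate classifiers; stand-ins chosen for illustration.
candidates = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM (RBF)": SVC(kernel="rbf", C=1.0),
}

def evaluate(candidates, X_train, X_test, y_train, y_test):
    """Fit each candidate on the training split and report test accuracy."""
    for name, clf in candidates.items():
        clf.fit(X_train, y_train)
        acc = accuracy_score(y_test, clf.predict(X_test))
        print(f"{name}: {acc:.3f}")
```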
✔ The proposed system classifies gestures performed at varying speeds with minimal preprocessing, making it computationally efficient. Moreover, it was found to run on a low-cost embedded platform, the Raspberry Pi Zero (USD 5), making it economically viable.