This repository accompanies a joint research study on Acoustic Scene Classification; the findings are published here. The code and the final paper are provided to support further research in this domain. The dataset is available on Kaggle.
Acoustic scene classification is the process of characterizing and classifying environments from sound recordings. The first step is to generate features (representations) from the recorded sound; the second is to classify the background environment from those features. However, the choice of representation has a dramatic effect on classification accuracy. In this paper, we explored the effect of three such representations on classification accuracy using neural networks: spectrograms, MFCCs, and embeddings, evaluated with different CNN architectures and autoencoders. Our dataset consists of sounds from three indoor and three outdoor settings, so it covers six different kinds of environments. We found that the spectrogram representation yields the highest classification accuracy, while MFCC yields the lowest. We report our findings, insights, and some guidelines for achieving better accuracy when classifying environments from sound.
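For illustration, here is a minimal sketch (not the exact pipeline used in the paper) of extracting two of the compared representations with librosa; the file name `scene.wav`, sample rate, and frame parameters are assumptions chosen for the example:

```python
import librosa
import numpy as np

# Load a hypothetical scene recording at a fixed sample rate.
y, sr = librosa.load("scene.wav", sr=22050)

# Log-mel spectrogram: a 2-D time-frequency image, suitable as CNN input.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
log_mel = librosa.power_to_db(mel, ref=np.max)

# MFCCs: a compact cepstral summary of the same spectrum.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
```

And a small CNN classifier sketch for the six environment classes, assuming log-mel inputs; the input shape and layer sizes are illustrative, not the architectures evaluated in the paper:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 431, 1)),  # (mel bands, frames, channels); assumed shape
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(6, activation="softmax"),  # six environment classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```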