ABSTRACT
Natural sounds are easily perceived and identified by humans and animals. Despite this, the neural transformations that enable sound perception remain largely unknown. The temporal characteristics of sounds are thought to be reflected in auditory assembly responses at the inferior colliculus (IC) and may play an important role in the identification of natural sounds. In this study, natural sounds were predicted from multi-unit activity (MUA) signals recorded in the IC. The data were obtained from a publicly accessible international platform. The temporal correlation values of the MUA signals were converted into images. Using two different segment sizes, with and without a denoising method, we generated four subsets for classification. Features of the images were extracted with pre-trained convolutional neural networks (CNNs), and the type of heard sound was classified. For this, we applied transfer learning from the AlexNet, GoogLeNet and SqueezeNet CNNs. Support vector machine (SVM), k-nearest neighbour (KNN), naive Bayes and ensemble classifiers were used. Accuracy, sensitivity, specificity, precision and F1 score were measured as evaluation parameters. Across all tests, removing the noise significantly improved accuracy. These results will allow neuroscientists to draw meaningful conclusions.
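The core preprocessing step described above, mapping the temporal correlations of an MUA segment to an image, can be sketched as follows. This is an illustrative reconstruction, not the authors' exact pipeline: the channel count, segment length and 8-bit rescaling are assumptions made for the example.

```python
import numpy as np

def correlation_image(mua_segment: np.ndarray) -> np.ndarray:
    """Map an (n_channels, n_samples) MUA segment to a uint8 image
    of pairwise temporal correlation coefficients."""
    corr = np.corrcoef(mua_segment)         # pairwise correlations in [-1, 1]
    scaled = (corr + 1.0) / 2.0             # rescale to [0, 1]
    return (scaled * 255).astype(np.uint8)  # 8-bit grayscale image

# Simulated segment: 16 recording channels, 500 time samples
rng = np.random.default_rng(0)
segment = rng.standard_normal((16, 500))
img = correlation_image(segment)
print(img.shape)  # (16, 16)
```

An image of this form can then be fed (after resizing and channel replication) to a pre-trained CNN such as AlexNet for feature extraction.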
Disclosure statement
No potential conflict of interest was reported by the author(s).
Authors’ contributions
All authors contributed to the development of the study. Material preparation, data collection and analysis were carried out by Fatma Özcan. The work was supervised by Ahmet Alkan. The first draft of the manuscript was written by Fatma Özcan. All authors have read and approved the final manuscript.
Ethics approval and consent to participate
This article does not contain any studies with human participants or animals performed by any of the authors.
Consent for publication
The authors of this work give their consent to the publication of this manuscript.
Availability of data and materials
The data supporting the findings of this study are available from “Multi-site neural recordings in the auditory midbrain of unanesthetized rabbits listening natural texture sounds and sound correlation auditory models” at CRCNS.org (Sadeghi et al., 2019).