Classifying Sound Sources Based On Directions Using Audio Visual Stimulus

There are times when a person must focus on one conversation while several others are happening nearby; the ability to do so is known as the cocktail party effect. Individuals with impaired hearing often lack this ability. This paper gives insight into how the brain handles these situations and filters out what a person is not focusing on. Three video monitors were placed in front of the subject, and each source played both video and audio. The objective was to determine whether classification accuracy changes when video stimuli are provided alongside the audio. Using a g.Nautilus headset, EEG was recorded from the subject while attending to each of the sources, producing one dataset per source. These datasets were used to train a single machine learning classifier that distinguishes which sound source the subject was attending to. The results yield an average accuracy of 94.28%.
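The paper does not specify the classifier or the features extracted from the EEG, so the following is a minimal sketch of the general pipeline it describes: epochs of multi-channel EEG labeled by attended source, a simple feature extraction step, and a supervised classifier. All dimensions, the synthetic data, and the choice of an SVM are assumptions for illustration only.

```python
# Hedged sketch of an EEG attention-direction classifier.
# The dataset here is synthetic; real data would come from the
# g.Nautilus headset, one recording per audio-visual source.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical dimensions: 300 trials, 8 EEG channels, 250 samples per epoch
n_trials, n_channels, n_samples = 300, 8, 250
labels = rng.integers(0, 3, n_trials)  # 3 source directions (e.g., left/center/right)

# Synthetic epochs with a class-dependent offset so the task is learnable
eeg = rng.normal(size=(n_trials, n_channels, n_samples))
eeg += labels[:, None, None] * 0.5

# Simple feature extraction: per-channel mean and variance of each epoch
features = np.concatenate([eeg.mean(axis=2), eeg.var(axis=2)], axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0, stratify=labels)

clf = SVC(kernel="rbf").fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"accuracy: {acc:.2f}")
```

In a real setting the feature step would typically use band-power or time-locked features rather than raw epoch statistics, but the train/evaluate structure is the same.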
