Applying Brain Signals Sonification for Automatic Classification
E.F. González Castañeda, A.A. Torres-García, C.A. Reyes-García, L. Villaseñor-Pineda
In recent years, sonification of electroencephalogram (EEG) signals has been used as an alternative way to analyze brain activity by converting EEG to audio. In this paper we apply sonification to EEG signals recorded during imagined (unspoken) speech, with the aim of improving the automatic classification of 5 Spanish words. To evaluate this approach, the brain signals of 27 healthy subjects were processed. Features were extracted from the sonified signals with two different methods: the discrete wavelet transform (DWT) and Mel-frequency cepstral coefficients (MFCC), the latter commonly used in speech recognition tasks. Three classification algorithms were applied: Naive Bayes (NB), Support Vector Machine (SVM), and Random Forest (RF). Results were obtained using the 4 channels closest to Broca's and Wernicke's language areas, as well as all 14 channels of the EEG device. Using EEG sonification, the average accuracies over the 27 subjects were 55.83% for the 4-channel set and 64.14% for the 14-channel set, improving the classification rates of imagined words compared with a scheme without sonification.
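The pipeline described above (signal → MFCC feature vector → classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a 128 Hz sampling rate and synthetic single-channel signals standing in for sonified EEG, and it uses a from-scratch MFCC computation plus scikit-learn's Random Forest; all frame sizes and coefficient counts are illustrative choices.

```python
import numpy as np
from scipy.fft import dct
from sklearn.ensemble import RandomForestClassifier

def mel_filterbank(n_filters, n_fft, sr):
    """Build mel-spaced triangular filters over the FFT bins."""
    hz_to_mel = lambda f: 2595 * np.log10(1 + f / 700)
    mel_to_hz = lambda m: 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)   # rising slope
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)   # falling slope
    return fb

def mfcc(signal, sr=128, frame_len=64, hop=32, n_filters=12, n_coeffs=8):
    """Frame the signal, take log mel-spectrum, apply DCT, average over frames."""
    frames = [signal[s:s + frame_len] * np.hamming(frame_len)
              for s in range(0, len(signal) - frame_len + 1, hop)]
    power = np.abs(np.fft.rfft(frames, n=frame_len)) ** 2
    logmel = np.log(power @ mel_filterbank(n_filters, frame_len, sr).T + 1e-10)
    coeffs = dct(logmel, type=2, axis=1, norm="ortho")[:, :n_coeffs]
    return coeffs.mean(axis=0)  # fixed-length feature vector per signal

# Toy demo: two synthetic "word" classes with different dominant rhythms.
rng = np.random.default_rng(0)
t = np.arange(256) / 128.0
X, y = [], []
for label, freq in enumerate([6.0, 20.0]):
    for _ in range(20):
        sig = np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(256)
        X.append(mfcc(sig))
        y.append(label)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])  # held-out accuracy; high on this easy toy data
```

In the paper's setting, one such feature vector would be extracted per channel of the sonified EEG and the per-channel vectors concatenated before classification; the DWT-based features and the NB/SVM classifiers would slot into the same pipeline in place of `mfcc` and `RandomForestClassifier`.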