Task-Specific Brain Synchronization in EEG-Based Speech and Speech Imagery Procedures

Speech is a complex cognitive skill of the brain, and retrieving verbal information from spoken and imagined speech using electrophysiological signals is widely researched as a means of assisting the speech impaired. Electroencephalography (EEG) has been a successful medium for analyzing the temporal and behavioral changes occurring in the brain during any task. In this work, an attempt has been made to distinguish such changes while speaking and imagining speaking the phoneme of the vowel 'a'. Brain activity was recorded from volunteers during both tasks, and basic signal processing routines were applied. The EEG sub-band frequencies were extracted using the Fast Fourier Transform (FFT). Synchronization indices were estimated for the segmented signals of the various sub-bands, and the behavioral changes were tabulated. The left hemisphere of the brain was found to function synchronously during the speech task, and the inverse phenomenon was observed during the imagery task. These results bring the possibility of reconstructing broken or incomplete speech from directly recorded brain activity a step closer. This work presents the analysis of brain connectivity parameters while speaking and imagining the vowel 'a'. Speech imagery can thus be used, via non-invasive techniques, to decode the thoughts of patients who are unable to express them.


INTRODUCTION
Speech is the most productive way of communication which people use to convey their intentions. Speech succeeds through the mapping between what a person speaks and the information retrieved by the listener. The neurological signatures behind speech production and speech listening remain a challenge for researchers in the field of cognitive neuroscience [1]. Though speech is a continuous phenomenon, phonetic segments and syllables are considered the most important parameters in speech perception. Mental imagery characterizes the capacity of an individual to recall and recollect realities previously experienced, whereas speech imagery refers to imagining speaking without making any sound [2]. Speech imagery has proved to be similar to genuine vocal communication. Several researchers have demonstrated that children with speech and language impairments show irregularities in Broca's area, a brain region responsible for speech [3].
Electroencephalography is a widely accepted, effective, and affordable technique for studying the electrical activity of the brain [4]. Signals can be collected noninvasively from different regions of the brain, and the functional connectivity between these regions during speech and speech imagery can be studied. Broca's and Wernicke's areas have been established as the brain regions related to speech and imagined speech [5,6]. Brain connectivity, the method of determining the directionality of interactions among different brain regions, is of great interest to neuro-researchers. Functional connectivity (FC) refers to the statistical dependence between the signals originating from two (or more) regions [7], i.e., the existence of a relationship between the corresponding signals. Generalized synchronization indices are parameters used for studying the functional connectivity between different regions of the brain [8]. These approaches were developed to investigate the interactions between nonlinear dynamical systems without any knowledge of their underlying governing equations. The indices representing generalized synchronization, namely the S-index and N-index, are estimated to analyze the synchronized activity of the brain during the speech and speech imagery tasks.
In this work, EEG signals have been recorded from healthy participants while speaking and imagining speaking the vowel 'a', following a predefined protocol. The collected signals were preprocessed using various signal processing routines in order to remove artifacts. The preprocessed signals were decomposed into five frequency bands, and functional connectivity parameters, namely the generalized synchronization indices and band power, were calculated. An analysis was then performed using the estimated parameters to understand the functionality of the brain during the speech and speech imagery tasks. Unique brain signatures were identified for both protocols, which supports the possibility of reconstructing broken or incomplete speech from directly recorded brain activity.

Five trials of EEG were recorded from five volunteers, comprising two women and three men, with a mean age of 21. The participants were clearly briefed on the experimental protocol and were given sufficient training on the difference between speaking and imagining speaking. They were seated comfortably on an armed chair to minimize movement artifacts.

A. EEG Data Acquisition
The electrical activity of the brain during speech and speech imagery was recorded following the 10-20 electrode placement system [9]. EEG data was acquired using the standard protocol, given in figure 2, with the help of the wireless EEG recording device, Emotiv. Impedance at all sites was maintained below 10 kΩ as per the manufacturer's specifications and monitored using the Emotiv software. The CMS and DRL electrodes, positioned on the left and right mastoids respectively, were chosen as reference electrodes during recording [10].

B. EEG Preprocessing
The task-related segments alone were extracted for pre-processing. Frequencies below 1 Hz and above 40 Hz were filtered out using an IIR Butterworth band-pass filter to remove low- and high-frequency noise [11]. Eye-blink and other artifacts were rejected manually, and the preprocessed signals were normalized. The EEG sub-bands were then segmented using the FFT [12] into the following frequency ranges: Delta: 0.01 to 3.5 Hz, Theta: 4 to 7 Hz, Alpha: 8 to 13 Hz, Beta: 14 to 30 Hz, and Gamma: 31 to 40 Hz.
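The steps above (1-40 Hz band-pass filtering followed by FFT-based sub-band segmentation) could be sketched in Python as follows. The 128 Hz sampling rate is an assumption based on typical Emotiv headsets, not a value stated here, and the function names are illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # assumed sampling rate (Hz); not specified in the text

# Sub-band boundaries as given in the text (Hz)
BANDS = {
    "delta": (0.01, 3.5),
    "theta": (4.0, 7.0),
    "alpha": (8.0, 13.0),
    "beta": (14.0, 30.0),
    "gamma": (31.0, 40.0),
}

def bandpass_1_40(x, fs=FS, order=4):
    """Zero-phase IIR Butterworth band-pass filter keeping 1-40 Hz."""
    b, a = butter(order, [1.0, 40.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

def fft_subbands(x, fs=FS):
    """Split a filtered epoch into the five EEG sub-bands by masking FFT bins."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    bands = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs <= hi)
        # Zero out all bins outside the band and invert back to the time domain
        bands[name] = np.fft.irfft(spectrum * mask, n=len(x))
    return bands
```

For example, a synthetic epoch containing a 6 Hz component would have that component recovered almost entirely in the theta output of `fft_subbands`.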

C. Feature Extraction
The generalized synchronization index captures the presence of a functional relation between the states of two systems performing a similar action at the same time. The states of a dynamical system Yn are a function of another system Xn, namely Yn = F(Xn), in their reconstructed state space. The process of identifying the function F gives rise to various synchronization indices. In this work, the S-index and N-index are estimated to analyze the synchronized activity of the brain during each task performed [13].

a) S-Index
The S-index [14] is defined as

S(X|Y) = (1/N) Σ_{n=1..N} [ Rn(X) / Rn(X|Y) ],

where S(X|Y) takes values in the range 0 to 1; 0 means independence and 1 means complete synchronization.

b) N-Index
The N-index [15] is defined as

N(X|Y) = (1/N) Σ_{n=1..N} [ (Rn(X) − Rn(X|Y)) / Rn(X) ],

where N(X|Y) falls within the range 0 to 1; 0 means complete independence, and 1 means perfect synchronization.
The above-mentioned synchronization indices are functional connectivity parameters estimated from the following two distances: (i) the mean squared Euclidean distance of Xn to its k closest neighbors,

Rn(X) = (1/k) Σ_{j=1..k} (Xn − X_{r_{n,j}})²,

and (ii) the conditional mean squared Euclidean distance, conditioned on the nearest-neighbor times in the time series Y,

Rn(X|Y) = (1/k) Σ_{j=1..k} (Xn − X_{s_{n,j}})²,

where r_{n,j} and s_{n,j}, j = 1, 2, …, k, denote the time indices of the k nearest neighbors of Xn and Yn, respectively.
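A minimal sketch of the S- and N-index estimation described above, assuming a time-delay embedding for state-space reconstruction; the embedding dimension m, delay tau, and neighbor count k are illustrative choices not specified in the text:

```python
import numpy as np

def delay_embed(x, m=3, tau=2):
    """Time-delay embedding: each row is a state vector Xn."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

def _sq_dists(states):
    """Pairwise squared Euclidean distances; self-distance set to inf."""
    d = ((states[:, None, :] - states[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    return d

def sync_indices(x, y, m=3, tau=2, k=5):
    """Return (S, N) generalized synchronization indices of X given Y."""
    dx = _sq_dists(delay_embed(x, m, tau))
    dy = _sq_dists(delay_embed(y, m, tau))
    n = dx.shape[0]
    # Time indices r_{n,j} and s_{n,j} of the k nearest neighbors of Xn and Yn
    r = np.argsort(dx, axis=1)[:, :k]
    s = np.argsort(dy, axis=1)[:, :k]
    rows = np.arange(n)[:, None]
    rk_x = dx[rows, r].mean(axis=1)   # Rn(X): distance to X's own neighbors
    rk_xy = dx[rows, s].mean(axis=1)  # Rn(X|Y): conditioned on Y's neighbor times
    # Mean squared distance of Xn to all other state vectors (for the N-index)
    r_all = np.where(np.isinf(dx), 0.0, dx).sum(axis=1) / (n - 1)
    s_index = np.mean(rk_x / rk_xy)
    n_index = np.mean((r_all - rk_xy) / r_all)
    return s_index, n_index
```

By construction, feeding the same signal in both arguments yields S = 1 (complete synchronization), while an unrelated signal in the second argument drives S toward 0.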

RESULTS AND DISCUSSIONS
The completion of the basic signal processing routines resulted in noise- and artifact-free sub-bands. The motive behind band separation is to carry out band-wise analysis and to ensure that the subjects reacted in accordance with the designed experimental protocol. The above-mentioned synchronization indices were estimated from each segmented band individually. Since theta-band activity has already been shown to relate to speech and speech-related tasks, theta-band features were considered for the analysis [11].
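The band power mentioned earlier, used in the band-wise analysis, could be estimated per sub-band as sketched below, here via Welch's PSD estimate; the 128 Hz sampling rate is assumed and this is illustrative rather than the authors' exact procedure:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Approximate total power of x in the [lo, hi] Hz band."""
    # Welch's method averages periodograms over overlapping segments
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
    mask = (f >= lo) & (f <= hi)
    # Rectangle-rule integration of the PSD over the band
    return np.sum(pxx[mask]) * (f[1] - f[0])
```

For instance, an epoch dominated by a 6 Hz rhythm would show its theta-band power (4-7 Hz) well above its alpha-band power (8-13 Hz).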
Lateralization of the brain holds that speech and language are cognitive behaviors of the left hemisphere, while imagination is controlled by the right hemisphere [16]. This statement is clearly evidenced in the analysis results of this work: the left hemisphere reacts actively during the speech task and the right hemisphere reacts more actively during the speech imagery task. It was also observed that the estimated index values fell into the small range of 0 to 0.5 for the speech task. This may be due to the combined activity of the speech articulation process, which includes motor activity; the presence of the motor cortex in the left hemisphere also contributes to its activation, which might interrupt the synchronized activity of the speech process. In contrast, the indices estimated for the speech imagery task ranged up to a maximum of 0.9. This phenomenon was observed across all the generalized synchronization indices, irrespective of the function of state considered for index estimation. The connectivity estimated with the S-index and N-index during the speech and speech imagery tasks is given in figures 3 and 4, respectively.

Figure 3. Connectivity graph for S-index of theta band during speech and speech imagery task

The synchronization between the F7-O1 (47L-22L Brodmann regions) and T7-O1 (45L-22L) electrode pairs was high during the speech task. This can be interpreted as synchronization between the temporal and fronto-parietal lobes of the left hemisphere, i.e., the Wernicke's and Broca's areas. Further, a higher synchronization between the right hemispheric electrodes P8-T8 (37R-45R) correlates with imagination being a predominant role of the right hemisphere.

CONCLUSIONS
The pre-processing routines and the connectivity analysis performed provide a better understanding of the interactions among brain regions during the speaking and imagined-speaking tasks. EEG sub-band analysis suggests higher theta-band activity during speech-related tasks; hence the generalized synchronization parameters, the S and N indices, were estimated for the segmented theta-band signal. The indices for the speech and speech imagery protocols showed the difference in synchronization between the left and right hemispheres, respectively. The interaction of the left hemispheric frontal and temporal lobes with the occipital lobe was higher during the speech task, whereas the interaction between the right hemispheric temporal and parietal lobes was higher during the speech imagery task.
These observations demonstrate unique brain signatures during the speaking and imagining processes. The results suggest that this non-invasive technique can be used as a method to retrieve verbal information from spoken and imagined speech. This work can be further extended to identify the features of different phonemes: speaking different phonemes produces different connectivity patterns in the brain that can be extracted from the EEG. These unique signatures can be registered and used to train an appropriate machine learning tool to identify the imagined phoneme. Enabling speech imagery to decode the thoughts of patients who cannot otherwise express them might thus change how speech-impaired people communicate with the outside world.