Using Statistical Classification Algorithms to Decode Covert Speech States with Functional Magnetic Resonance Imaging

Date

2010-11-02

Authors

McCorry, Devin M.

Abstract

An effective covert speech brain-computer interface (BCI) would be a system that decodes a subject's internal speech-related neural activity and translates it into text or synthesized speech. Multiple clinical populations stand to benefit from such a BCI, most notably patients with locked-in syndrome, a disorder marked by near-complete motor paralysis despite relatively unimpaired cognition. Determining which regions of the brain contain speech information that could be decoded by a covert speech BCI is a necessary step toward actual BCI implementation. In the present study I investigated whether cortical areas involved in motor speech production and speech perception contain such information. I used functional magnetic resonance imaging (fMRI) to scan five subjects (2 male, 3 female; mean age = 23.6) while they completed 6 runs of a speech task in which they were prompted to speak one of two syllables: /pah/ or /tah/. In one scanning session, subjects spoke these syllables overtly (aloud); in another session, they covertly "spoke" the syllables in their mind without moving. Results from two independent tasks, one involving motor articulation and one involving perception of speech, were used to select regions of interest (ROIs) for each subject. These ROIs were used to subset the activation observed during the speech tasks. I then used multi-voxel pattern analysis (MVPA) to train statistical algorithms to classify which of the two syllables was spoken (overtly or covertly), based solely on this subset of the fMRI data from the speech tasks. The MVPA was conducted using regressed parameter estimates of the syllables from each of the 6 runs. Classification accuracy was significantly above chance in both the speech production and speech perception ROIs for both overt and covert speech (p < .05). Accuracy was significantly higher for overt than for covert speech (p < .05), and a nonsignificant trend suggested higher accuracy in the motor ROI than in the perception ROI. These findings indicate that neural activity patterns during covert and overt speech may be similar enough that models of overt speech could be applied to decoding inner speech. Importantly, speech motor and perception regions may encode sufficient detail about a person's internal speech states to support decoding in a future implementation of a covert speech BCI. Furthermore, the results of this study demonstrate the utility of MVPA for mapping out regions to use in future BCIs based on decoding cognitive states.
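
For readers who want a concrete picture of the analysis pipeline, the following is a minimal sketch, in Python, of the classification step the abstract describes: a linear support vector machine trained on per-run, ROI-masked parameter-estimate (beta) patterns with leave-one-run-out cross-validation. This is not the thesis's actual code; the use of scikit-learn, the voxel count, the array shapes, and the synthetic data standing in for real GLM estimates are all illustrative assumptions, and the binomial test shown is just one simple way to compare accuracy to chance.

```python
# Sketch of MVPA syllable classification (illustrative, not the thesis's code).
import numpy as np
from scipy.stats import binomtest
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_runs = 6      # six runs per session, as in the study
n_voxels = 200  # voxels surviving the ROI mask (assumed number)

# One beta pattern per syllable per run -> 12 samples. Real data would be
# parameter estimates regressed from each run's fMRI time series; random
# values here just keep the sketch self-contained and runnable.
X = rng.standard_normal((2 * n_runs, n_voxels))
y = np.tile([0, 1], n_runs)             # 0 = /pah/, 1 = /tah/
runs = np.repeat(np.arange(n_runs), 2)  # run label for each sample

# Leave-one-run-out cross-validation: train on 5 runs, test on the 6th.
clf = SVC(kernel="linear")
preds = cross_val_predict(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
accuracy = (preds == y).mean()

# Binomial test of pooled cross-validated predictions against the 50%
# chance level (the thesis may have tested significance differently,
# e.g. across subjects).
result = binomtest(int((preds == y).sum()), n=len(y), p=0.5,
                   alternative="greater")
print(f"accuracy = {accuracy:.2f}, p = {result.pvalue:.3f}")
```

With random data the accuracy hovers around chance; applied to real ROI-masked beta patterns, the same leave-one-run-out scheme yields the per-ROI accuracies the abstract reports.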

Keywords

Covert speech, Multi-voxel pattern analysis, Brain-computer interface, Support vector machine, fMRI
