Mason Archival Repository Service

Using Statistical Classification Algorithms to Decode Covert Speech States with Functional Magnetic Resonance Imaging

Show simple item record

dc.contributor.advisor Thompson, James C.
dc.contributor.author McCorry, Devin M.
dc.creator McCorry, Devin M.
dc.date 2010-07-09
dc.date.accessioned 2010-11-02T11:30:51Z
dc.date.available NO_RESTRICTION en_US
dc.date.available 2010-11-02T11:30:51Z
dc.date.issued 2010-11-02
dc.identifier.uri https://hdl.handle.net/1920/6022
dc.description.abstract An effective covert speech brain-computer interface (BCI) would decode a subject's internal speech-related neural activity and translate it into text or synthesized speech. Multiple clinical populations stand to benefit from such a BCI, most notably patients with locked-in syndrome, a disorder marked by near-complete motor paralysis despite relatively unimpaired cognition. Determining which regions of the brain contain speech information that could be decoded in a covert speech BCI is an important and necessary step toward actual BCI implementation. In the present study I investigated whether cortical areas involved in motor speech production and speech perception contain such information. I used functional magnetic resonance imaging (fMRI) to scan subjects (2 males and 3 females, mean age = 23.6) while they completed 6 runs of a speech task in which they were prompted to speak one of two syllables: /pah/ or /tah/. In one scanning session, subjects spoke these syllables overtly (aloud); in another session, they covertly "spoke" the syllables in their mind without moving. Results from two independent tasks, one involving motor articulation and one involving perception of speech, were used to select regions of interest (ROIs) for each subject. These ROIs were used to subset the activation observed during the speech tasks. I then used multi-voxel pattern analysis (MVPA) to train statistical algorithms to classify which of the two syllables was spoken (overtly or covertly) based solely on the subset of fMRI data from the speech tasks. The MVPA was conducted on regressed parameter estimates of the syllables from each of the 6 runs. Classification accuracy was significantly above chance in both the speech production and speech perception ROIs for both overt and covert speech (p < .05). Accuracy was significantly higher for overt than for covert speech (p < .05), and a nonsignificant trend suggested higher accuracy in the motor ROI than in the perception ROI. These findings indicate that patterns of neural activity during covert and overt speech may be similar enough to apply overt speech models to methods of decoding inner speech. Importantly, speech motor and perception regions may encode sufficient detail about a person's internal speech states to support a future implementation of a covert speech BCI. Furthermore, the results of this study demonstrate the utility of using MVPA to map out regions to use in future BCIs based on decoding cognitive states.
dc.language.iso en_US en_US
dc.subject covert speech en_US
dc.subject multi-voxel pattern analysis en_US
dc.subject brain-computer interface en_US
dc.subject support vector machine en_US
dc.subject fMRI en_US
dc.title Using Statistical Classification Algorithms to Decode Covert Speech States with Functional Magnetic Resonance Imaging en_US
dc.type Thesis en
thesis.degree.name Master of Arts in Psychology en_US
thesis.degree.level Master's en
thesis.degree.discipline Psychology en
thesis.degree.grantor George Mason University en
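
The abstract above outlines the decoding pipeline: within each subject's ROIs, per-run parameter estimates for the two syllables are fed to a statistical classifier (per the subject keywords, a support vector machine), and cross-validated accuracy is tested against the 50% chance level. What follows is a minimal sketch of that style of analysis in Python, assuming scikit-learn with a linear SVM, leave-one-run-out cross-validation, and a binomial test against chance; the random placeholder data, voxel count, and variable names are illustrative assumptions, not the thesis's exact procedure.

# A minimal MVPA sketch: classify which of two syllables (/pah/ vs. /tah/)
# was spoken from ROI voxel patterns, one pattern per run and syllable.
# All data here are random placeholders; only the 6-run structure and the
# two-syllable design come from the abstract.
import numpy as np
from scipy.stats import binomtest
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_runs, n_voxels = 6, 200               # 6 runs as in the study; voxel count is illustrative
# X: one row per (run, syllable) parameter-estimate pattern within the ROI
X = rng.standard_normal((n_runs * 2, n_voxels))
y = np.tile([0, 1], n_runs)             # 0 = /pah/, 1 = /tah/
runs = np.repeat(np.arange(n_runs), 2)  # run label for each pattern

# Linear SVM with per-fold standardization, leave-one-run-out cross-validation
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
accuracy = scores.mean()

# Compare accuracy to the 50% chance level with a binomial test
# (one common convention; the thesis's exact statistical test may differ).
n_correct = int(round(accuracy * len(y)))
p = binomtest(n_correct, n=len(y), p=0.5, alternative="greater").pvalue
print(f"mean accuracy = {accuracy:.2f}, p = {p:.3f}")

Leave-one-run-out folds keep training and test patterns from the same run separated, a common safeguard against run-level dependencies inflating accuracy in fMRI classification.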

