Department of English

Recent Submissions

  • Item
    Which phonetic features should pronunciation instructions focus on? An evaluation on the accentedness of segmental/syllable errors in L2 speech
    (Research in Language, 2018) Gao, Zhiyan; Weinberger, Steven
    Many English language instructors are reluctant to incorporate pronunciation instruction into their teaching curriculum (Thomson 2014). One reason for this reluctance is that L2 pronunciation errors are numerous, and there is not enough time for teachers to address all of them (Munro and Derwing 2006; Thomson 2014). The current study aims to help language teachers set priorities for their instruction by identifying the segmental and structural aspects of pronunciation that sound most foreign-accented to native speakers of American English. The study employed a perception experiment: 100 speech samples selected from the Speech Accent Archive (Weinberger 2016) were presented to 110 native speakers of American English, who rated the foreign accentedness of each sample on a 9-point scale. Twenty of these samples contain no segmental or syllable-structure L2 errors; the other 80 each contain a single consonant, vowel, or syllable-structure L2 error. The speakers of these samples come from 52 different native-language backgrounds. Global prosody of each sample was controlled for by comparing its F0 contour and duration to a native English sample using the Dynamic Time Warping method (Giorgino 2009); a toy sketch of such an alignment follows this list. The results show that 1) L2 consonant errors in general are judged to be more accented than vowel or syllable-structure errors; 2) phonological environment affects accent perception; 3) occurrences of non-English consonants always lead to higher accentedness ratings; 4) among L2 syllable errors, vowel epenthesis is judged to be as accented as consonant substitutions, while deletion is judged to be less accented or not accented at all. The study therefore recommends that language instructors attend to consonant errors in L2 speech while taking their respective phonological environments into consideration.
  • Item
    Collecting Psycholinguistic Response Time Data Using Amazon Mechanical Turk
    (Public Library of Science, 2015-03-30) Enochson, Kelly; Culbertson, Jennifer
    Researchers in linguistics and related fields have recently begun exploiting online crowd-sourcing tools, like Amazon Mechanical Turk (AMT), to gather behavioral data. While this method has been successfully validated for various offline measures, such as grammaticality judgments and other forced-choice tasks, its use for mainstream psycholinguistic research remains limited. This is because psycholinguistic effects are often dependent on relatively small differences in response times, and there remains some doubt as to whether precise timing measurements can be gathered over the web. Here we show that three classic psycholinguistic effects can in fact be replicated using AMT in combination with open-source software for gathering response times client-side. Specifically, we find reliable effects of subject definiteness, filler-gap dependency processing, and agreement attraction in self-paced reading tasks using approximately the same numbers of participants and/or trials as comparable laboratory studies (an illustrative analysis sketch follows this list). Our results suggest that psycholinguists can and should be taking advantage of AMT and similar online crowd-sourcing marketplaces as a fast, low-resource alternative to traditional laboratory research.
  • Item
    Collecting Response Time Data Using Amazon Mechanical Turk
    (2014-11-02) Enochson, Kelly; Culbertson, Jennifer
    Researchers in linguistics and related fields have recently begun exploiting online crowd-sourcing tools, like Amazon Mechanical Turk (AMT), to gather behavioral data. While this method has been successfully used for various offline measures, such as grammaticality judgments and other forced-choice tasks, its validity for mainstream psycholinguistic research remains in question. This is because psycholinguistic effects are often dependent on relatively small differences in response times, and there is substantial doubt as to whether precise timing measurements can be gathered over the web. Here we show that three classic psycholinguistic effects can in fact be replicated using AMT in combination with open-source software for gathering response times client-side. Specifically, we find reliable effects of subject definiteness, filler-gap dependency processing, and agreement attraction in self-paced reading tasks using approximately the same numbers of participants and/or trials as similar laboratory studies. Our results suggest that psycholinguists can and should be taking advantage of AMT and similar online crowd-sourcing marketplaces as a fast, low-resource alternative to traditional laboratory research.
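
The first item above controls for global prosody by aligning each sample's F0 contour and duration against a native English sample with Dynamic Time Warping; the Giorgino (2009) reference describes the R dtw package. The sketch below is a minimal, self-contained illustration of that alignment idea, not the authors' implementation: the contour values, sampling, and absolute-difference local cost are illustrative assumptions.

```python
# Minimal DTW sketch (illustrative, not the study's pipeline): accumulate the
# cost of the cheapest monotonic alignment between two F0 contours.
import numpy as np

def dtw_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Accumulated cost of the optimal DTW alignment of contour x onto contour y."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])            # local cost: F0 difference in Hz
            cost[i, j] = d + min(cost[i - 1, j],     # step: stay on y, advance x
                                 cost[i, j - 1],     # step: stay on x, advance y
                                 cost[i - 1, j - 1]) # step: advance both (match)
    return float(cost[n, m])

# Hypothetical F0 contours (Hz) for a non-native and a native reading of the
# same passage; real contours would come from a pitch tracker.
nonnative_f0 = np.array([110, 118, 130, 125, 140, 150, 160, 170, 155, 140], dtype=float)
native_f0    = np.array([112, 120, 128, 132, 138, 145, 152, 150, 142, 138], dtype=float)

print("DTW distance:", dtw_distance(nonnative_f0, native_f0))
```

A smaller accumulated cost indicates that the two contours can be warped onto each other cheaply, i.e. that their global prosodic shapes are similar; in the study this lets segmental and syllable-structure errors be compared against a prosodically matched native baseline.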
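The two Mechanical Turk items hinge on detecting small reading-time differences in web-collected self-paced reading data. The snippet below is a hypothetical sketch of how such word-by-word reading times might be trimmed and summarized by condition; the condition labels, region name, trimming window, and data are assumptions for illustration, not the authors' published analysis.

```python
# Illustrative summary of self-paced reading times by condition (assumed data).
from statistics import mean

# Each record: (participant, condition, region, reading time in ms).
trials = [
    ("p01", "attractor_match",    "verb", 412),
    ("p01", "attractor_mismatch", "verb", 371),
    ("p02", "attractor_match",    "verb", 455),
    ("p02", "attractor_mismatch", "verb", 398),
    ("p03", "attractor_match",    "verb", 3890),  # implausibly slow; trimmed below
    ("p03", "attractor_mismatch", "verb", 405),
]

MIN_RT, MAX_RT = 100, 2000  # assumed trimming window, in ms

def condition_means(data, region):
    """Mean reading time per condition for one region, after trimming outliers."""
    kept = [t for t in data if t[2] == region and MIN_RT <= t[3] <= MAX_RT]
    by_condition = {}
    for _, condition, _, rt in kept:
        by_condition.setdefault(condition, []).append(rt)
    return {condition: mean(rts) for condition, rts in by_condition.items()}

print(condition_means(trials, "verb"))
```

The point of the sketch is only that the effects of interest live in per-region differences of a few tens of milliseconds, which is why the abstracts stress whether client-side timing over the web is precise enough to recover them.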
The works represented by this collection are protected by copyright.