Collecting Response Time Data Using Amazon Mechanical Turk




Enochson, Kelly
Culbertson, Jennifer


Researchers in linguistics and related fields have recently begun exploiting online crowd-sourcing tools, like Amazon Mechanical Turk (AMT), to gather behavioral data. While this method has been successfully used for various offline measures—grammaticality judgment or other forced-choice tasks—its validity for mainstream psycholinguistic research remains in question. This is because psycholinguistic effects often depend on relatively small differences in response times, and there is substantial doubt as to whether precise timing measurements can be gathered over the web. Here we show that three classic psycholinguistic effects can in fact be replicated using AMT in combination with open-source software for gathering response times client-side. Specifically, we find reliable effects of subject definiteness, filler-gap dependency processing, and agreement attraction in self-paced reading tasks using approximately the same numbers of participants and/or trials as similar laboratory studies. Our results suggest that psycholinguists can and should take advantage of AMT and similar online crowd-sourcing marketplaces as a fast, low-resource alternative to traditional laboratory research.
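The key technical point above is that word-by-word reading times are recorded client-side, in the participant's browser, rather than round-tripping to a server. As a rough illustration only (the study used existing open-source software; the `SelfPacedTrial` class and its names below are hypothetical), a moving-window self-paced reading trial can be sketched in TypeScript: each advance key press reveals the next word, and the elapsed time since the previous press is logged as the reading time for the word just read.

```typescript
// Hypothetical sketch of client-side RT logging for one
// moving-window self-paced reading trial. Not the actual
// software used in the study.

type RegionRT = { word: string; rtMs: number };

class SelfPacedTrial {
  private rts: RegionRT[] = [];
  private lastPress: number;
  private index = 0;

  // `now` is injectable for testing; in a browser it would
  // default to the high-resolution performance.now() clock.
  constructor(
    private words: string[],
    private now: () => number = () => performance.now()
  ) {
    this.lastPress = this.now();
  }

  // Called on each advance key press (e.g. spacebar).
  // Returns the next word to display, or null when done.
  advance(): string | null {
    const t = this.now();
    if (this.index > 0) {
      // Log the RT for the word the participant just read.
      this.rts.push({
        word: this.words[this.index - 1],
        rtMs: t - this.lastPress,
      });
    }
    this.lastPress = t;
    if (this.index >= this.words.length) return null;
    return this.words[this.index++];
  }

  results(): RegionRT[] {
    return this.rts;
  }
}
```

Because timestamps are taken locally at each key press, network latency affects only when the data are uploaded afterwards, not the recorded intervals themselves; this is what makes millisecond-scale reading-time differences recoverable over the web.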


To access this dataset, go to:

This record replaces an earlier dataset available at:


Psycholinguistics, Crowd-sourcing, Amazon Mechanical Turk