There is a high level of ‘inattentiveness’ in even the top-ranked workers recruited for research through Amazon Mechanical Turk, says a new study co-authored by Neil Stott of Cambridge Judge Business School.
Companies, organisations and academics often use Amazon Mechanical Turk, a leading web-based tool that enables researchers to recruit participants online for surveys and other tasks with less cost, time and effort. The service identifies its top crowd workers as those classified as “Master”, with an approval rate of 98% or higher and at least 1,000 approved Human Intelligence Tasks (HITs).
A new study finds, however, that even among these elite “Turkers” (as they are known), a substantial proportion are inattentive to the tasks before them – and the authors urge researchers to take measures to address this even though it may increase the cost.
Failing the attention check
Specifically, the study found that 126 of 564 participants (22.3%) failed at least one of the three forms of attention check – with 94 failing the honesty check, 31 failing the logic check, and 27 failing the time check. The logic check required participants to demonstrate “comprehension of logical relationships”; the honesty check asked participants to state their perceptions of their effort and data validity for the study; the time check used response time to determine whether participants completed the experiment within a reasonable period, based on a conservative reading rate of 200 words per minute.
For example, participants failed the logic check if they did not respond “strongly agree” to these two statements: “At some point in my life, I have had to consume water in some form”, and “I would rather eat a piece of fruit than a piece of paper”.
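The time check described above can be sketched in a few lines. This is an illustrative reconstruction, not the study’s actual implementation: the word count and example values are assumptions, and only the 200-words-per-minute reading rate comes from the study.

```python
# Illustrative sketch of a reading-rate-based time check.
# Assumption: the only figure taken from the study is the conservative
# reading rate of 200 words per minute; everything else is hypothetical.

def minimum_plausible_seconds(word_count: int, reading_rate_wpm: int = 200) -> float:
    """Shortest time (in seconds) a participant could plausibly need
    to read `word_count` words at the given reading rate."""
    return word_count / reading_rate_wpm * 60

def fails_time_check(completion_seconds: float, word_count: int) -> bool:
    """Flag a submission completed faster than the reading-rate floor."""
    return completion_seconds < minimum_plausible_seconds(word_count)

# A hypothetical 1,000-word survey needs at least 300 seconds at 200 wpm,
# so a 2-minute completion would be flagged as inattentive.
print(minimum_plausible_seconds(1000))  # 300.0
print(fails_time_check(120, 1000))      # True
```

A real screening pipeline would combine this with the logic and honesty checks, since the study found most failures on the honesty check rather than the time check.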
The study – entitled “The hidden cost of using Amazon Mechanical Turk for research” – forms a chapter in the Lecture Notes in Computer Science book series.
“When sourcing participants through Amazon Mechanical Turk, researchers expect the participants to be ‘attentive’ and answer the questions diligently and in good faith,” said study co-author Neil Stott, Faculty (Professor level) in Management Practice and Co-Director of the Cambridge Centre for Social Innovation at Cambridge Judge Business School. “Yet we found that a significant number of the premium ‘top workers’ were ‘inattentive’ and this can seriously undermine research projects.”
Researchers should adjust their process
“We recommend that new attention checks be added into the process, irrespective of participants’ presumed quality based on MTurk criteria such as ‘Master’. These measures – identifying participants who don’t pay attention, and recruiting additional participants to replace submissions that must be rejected – would require researchers to adjust their proposals to account for the additional effort and cost. Amazon Mechanical Turk is certainly a useful tool for many researchers, and some studies about it have been reassuring, but our findings suggest that caution and additional measures are needed,” says Neil.
While Amazon does not disclose real-time data on the total number of workers available for hire via its Mechanical Turk service, some previously published research reported more than 400,000 workers registered in 2010 and an estimated 50,000 to 100,000 HITs at any given time, the study says.
The study is based on 564 individuals from the US, mostly in the 31-55 age range and with at least some undergraduate education, with 52% men and 48% women. Participants were asked to read a vignette describing one of four hypothetical technology products, and were then asked about their intention to adopt that technology through two questionnaires totalling 126 questions.
The study offered participants $22 per hour, compared to a median Amazon Mechanical Turk rate of $3.01 per hour in the US and the current federal minimum wage of $7.25 per hour, but still found “substantial evidence of inattentiveness”. The authors also examined age, gender, income, marital status, race, and schooling, but found no relationship between these characteristics and participant attention.
Higher payment doesn’t fix the problem
“We limited our sample to the top crowd workers and provided a high level of compensation, seeking to ensure that those recruited were motivated,” says the study’s lead author, Antonios Saravanos of New York University, who is a graduate of the Masters in Social Innovation degree programme at Cambridge Judge. “It was therefore extremely surprising to find inattentive participants within our sample. The findings demonstrate that one cannot resolve the inattentiveness problem by using money as a motivator, by selecting the ‘Master’ ranking (for which Amazon charges a 20% extra fee), or by using the approval-rating or number-of-HITs filtering mechanisms.
“Consequently, there is no quick fix to resolve inattentiveness. The only option left for researchers is to accept that they will have inattentive subjects within their MTurk sample and address the situation accordingly.”
The book chapter is co-authored by Antonios Saravanos, Stavros Zervoudakis, Dongnanzi Zheng, Bohdan Hawryluk, and Donatella Delfino of New York University, and by Neil Stott of Cambridge Judge Business School.