Problem: Suppose you need to run a HIT with 1,000 Workers, or a HIT that is open only to Workers who have an approval rating of 95% or more and have completed 500 or more HITs. When you launched your HIT, MTurk Workers arrived at a steady pace, but over time the pace has slowed to a trickle, such that your HIT will never complete.
What can you do to speed up your HIT?
Solution: TurkPrime.com Feature: Restart HIT
Simply use the TurkPrime "Restart" feature, which restarts HITs that have become sluggish. When you restart a HIT, it gets "bumped up" in Worker visibility as if it had just been launched.
Tuesday, March 24, 2015
Monday, March 23, 2015
Problem: Suppose you need to run a group of HITs open only to participants who are women under 50. You previously ran a HIT and know the WorkerIds that you want to reach, but have no way to email those Workers or limit your survey to only them. How can you proceed?
Solution: TurkPrime.com Worker Groups and Worker Emails
1. TurkPrime recently added a new feature called Worker Groups, which allows any MTurk Requester to create a reusable Worker Group based on MTurk Workers' WorkerIds.
Friday, March 13, 2015
An IRB will generally request a description of how participants will be recruited, reimbursed, and interacted with. Additionally, IRBs always request information about how the anonymity of participants is protected. Members of the IRB may not be familiar with Amazon Mechanical Turk, and it may be helpful to include a brief description of MTurk in your IRB application. Note that many MTurk studies will be exempt from review, provided that the nature of MTurk is explained clearly enough and the anonymity of the data collection process is made clear.
Thursday, March 12, 2015
It’s been a while since the last update on the demographics of Mechanical Turk Workers, so we thought it’s time for a new look. The current consensus seems to be that MTurk Workers are primarily female; for example, Panos Ipeirotis' blog reports that US-based Workers are 65% female. MTurk is always changing, and this report presents data from 75 studies conducted over the last two years.
Seventy-five Mechanical Turk studies conducted with US-based Workers in 2013 and 2014 were reviewed. Of a total of 32,595 Workers, 15,324 (47%) were female.
Monday, March 9, 2015
The simple formula
We describe a general formula for predicting the time it takes Workers to complete survey studies on MTurk. The average Worker takes 10.3 seconds to answer a single question, which means that a study with 60 questions should take approximately 10 minutes. At $6 per hour, the appropriate pay for a 60-question survey would be about $1.
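The simple formula can be sketched in a few lines of Python. This is an illustrative calculation based on the averages above, not an official TurkPrime tool; the function names and the $6-per-hour default are our own.

```python
# Sketch of the simple formula: predicted completion time and fair pay,
# using the blog's average of 10.3 seconds per question.

SECONDS_PER_QUESTION = 10.3  # average time per question reported above
TARGET_HOURLY_RATE = 6.00    # dollars per hour (recommended target)

def estimated_minutes(num_questions: int) -> float:
    """Predicted completion time for a survey, in minutes."""
    return num_questions * SECONDS_PER_QUESTION / 60

def suggested_pay(num_questions: int, hourly_rate: float = TARGET_HOURLY_RATE) -> float:
    """Pay (in dollars) implied by the target hourly rate."""
    return round(estimated_minutes(num_questions) / 60 * hourly_rate, 2)

print(estimated_minutes(60))  # ~10.3 minutes
print(suggested_pay(60))      # ~$1.03
```

For a 60-question survey this reproduces the figures in the text: roughly 10 minutes of work and about $1 in pay at a $6-per-hour target.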
The slightly more nuanced approach
We also show that increasing the pay rate and decreasing the length of a survey can increase the average time that Workers spend on each question by 36%. Pay rate and the number of questions in a HIT both influence how long Workers spend answering questions: Workers spend less time on each question in longer surveys and in surveys that pay less. Survey length also moderates the association between pay rate and the time that Workers spend answering questions.
A more detailed approach to predicting the length of a survey is depicted in Figure 1, which takes both survey length and pay rate into consideration when predicting the time it takes Workers to answer a single question. For longer surveys with 108 questions or more, time per question is closer to 8.3 seconds and is independent of pay rate. For medium-length surveys with 65 questions, time per question ranges from 9.2 seconds at a pay rate of $1.80 per hour, to 10 seconds at $3.50 per hour, to 10.6 seconds at over $5 per hour. Shorter surveys with 28 questions are answered at a rate of 10 seconds per question at $1.80 per hour, 11.5 seconds per question at $3.50 per hour, and over 13.2 seconds at pay rates above $5 per hour. Overall, higher pay rates are most effective at increasing completion time for shorter surveys.
This approach probably also generalizes to non-MTurk online surveys and paper and pencil surveys, but more research should be done to compare completion time across different platforms.
Sunday, March 8, 2015
It is generally thought that pay rate does not affect data quality on Mechanical Turk. For example, Buhrmester, Kwang, and Gosling (2011) showed that whether Workers are paid 5 cents or one dollar for a survey study, the internal reliability of the surveys does not change. They did show, however, that fewer Workers will take the surveys that pay less. We recently replicated these findings for both US- and India-based Workers (Litman et al., 2014). Here we show that low pay rates have two effects on Workers: 1) Workers are more likely to return a HIT before completing it, and 2) Workers spend less time answering questions. We examined 30 MTurk studies that were run over the last 6 months. The findings show that 36% of the variance in dropout rate is explained by the length and pay rate of a survey. These results show that low pay rates do more than just slow down the rate at which Workers take HITs. Low pay rates may also negatively impact the representativeness of data due to high participant dropout, and they may decrease how much attention participants pay to each question. Based on these findings, we recommend against low-paying HITs. We also recommend against overly long surveys, unless Workers are appropriately compensated. To minimize dropout and to maximize time on task, compensation for HITs should not be below $4 per hour and should be closer to $6 per hour or more.
Friday, March 6, 2015
What is the completion rate and dropout rate?
Dropout rate is defined as the percentage of participants who start taking a study but do not complete it. Dropout rate is sometimes referred to as attrition rate, and is the opposite of completion rate (dropout rate = 100 – completion rate). On MTurk, completion rate is defined as the number of Workers who submit a HIT divided by the number of Workers who accept the HIT. Note that, for the definition of completion rate used here, Rejected Workers are counted as completes.
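The definitions above reduce to two one-line calculations. A minimal sketch, assuming you have the accept and submit counts for a HIT (the function names are ours):

```python
# Completion and dropout rates from HIT accept/submit counts.
# Per the definition above, rejected submissions still count as completes.

def completion_rate(accepted: int, submitted: int) -> float:
    """Percentage of accepting Workers who went on to submit the HIT."""
    return 100.0 * submitted / accepted

def dropout_rate(accepted: int, submitted: int) -> float:
    """Dropout (attrition) rate = 100 - completion rate."""
    return 100.0 - completion_rate(accepted, submitted)

# Example: 200 Workers accepted the HIT, 170 submitted it.
print(completion_rate(200, 170))  # 85.0
print(dropout_rate(200, 170))     # 15.0
```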
Why is completion rate important?
Completion rate is an important indicator of data quality. A low completion rate indicates a selection bias that may influence the representativeness of the results, and a very high dropout rate may mean that something is wrong with the study. It is typically good practice to report completion rate in the method or results section of a paper. Indeed, some editors require authors to use the CHERRIES checklist for survey research (Eysenbach, 2004), which asks about a study’s completion rate.