Academic research is a collaborative endeavor. Faculty members work with post-docs, grad students, and undergrads. Sometimes one lab collaborates with another. During the course of such work, resources sometimes need to be shared or redistributed. At TurkPrime, we have sought to make part of this sharing easier by allowing researchers to transfer funds from one user’s lab balance to another. In this blog, we demonstrate how to use this feature.
One reason Amazon Mechanical Turk has become so popular among researchers is the speed with which data can be collected. Compared to more traditional research methods such as lab-based experiments, field studies, and ethnographic interviews, MTurk is exceptionally fast, making it possible to collect data for an entire study within a day, or sometimes within just a few hours. Although MTurk's speed is a major advantage, there are times when collecting data all at once can actually be a problem. In this blog, we explain how to spread your data collection out across time and why you might want to do so.
Three weeks ago, we published a blog explaining five things you should be doing in your online data collection. In this blog, we follow up with five things you should NOT be doing when collecting data on MTurk.
Researchers are expected to be experts, or at least knowledgeable, in several areas: the topic of their research, the methods common within their discipline, best practices for open science, and the media used to communicate about their work, to name just a few. For many researchers, online data collection has been revolutionary, making it possible to collect data faster and more affordably than ever before. Yet with the emergence of online research, there is now one more domain to master. Given the steep learning curve involved in running online studies well, we put together this blog to highlight five practices that, if you're not already following in your online research, you should be. These practices primarily apply to research conducted on Amazon's Mechanical Turk using TurkPrime's MTurk Toolkit, but some can be applied to other platforms as well.
At TurkPrime, we advocate for requesters to treat workers fairly when posting HITs on Amazon's Mechanical Turk (MTurk). Workers are, after all, the people who make the research possible. Sometimes, however, a worker is unable to receive payment despite having completed a survey. Below are two common scenarios in which this can happen:
- We collected high-quality data on MTurk when using TurkPrime’s IP address and Geocode-restricting tools.
- Using a novel format for our anchoring manipulation, we found that Turkers are highly attentive, even under taxing conditions.
- After querying the TurkPrime database, we found that farmer activity has significantly decreased over the last month.
- When MTurk is used the right way, researchers can be confident they are collecting quality data.
- We are continuously monitoring and maintaining data quality on MTurk.
- Starting this month, we will be conducting monthly surveys of data quality on Mechanical Turk.
Last week, the research community was struck with concern that “bots” were contaminating data collection on Amazon’s Mechanical Turk (MTurk). We wrote about the issue and conducted our own preliminary investigation into the problem using the TurkPrime database. In this blog, we introduce two new tools TurkPrime is launching to help researchers combat suspicious activity on MTurk and reiterate some of the important takeaways from this conversation so far.
TurkPrime is announcing a change in our pricing for the MicroBatch feature. MicroBatch is now included as a Pro feature, with a fee of 2 cents + 5% per complete. This change also gives users access to all other Pro features at no additional charge. The change is necessary so that we can continue to provide the high-quality service and tools that our users expect.
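The per-complete fee above works out to a quick calculation. Here is a minimal sketch, assuming the 5% portion is charged on the participant reward (the announcement does not state the exact basis, so treat this as illustrative); the function name is hypothetical.

```python
def microbatch_fee(reward: float) -> float:
    """Estimate the MicroBatch fee for one completed assignment.

    Assumption: the fee is a flat $0.02 plus 5% of the participant
    reward. Check current TurkPrime pricing for the exact basis.
    """
    return 0.02 + 0.05 * reward

# Example: a $1.00 reward implies an estimated fee of about $0.07
# per complete; a $0.50 reward, about $0.045.
fee_at_one_dollar = microbatch_fee(1.00)
fee_at_fifty_cents = microbatch_fee(0.50)
```

Because floating-point currency math accumulates rounding error, a production version of a calculation like this would typically use Python's `decimal` module instead of `float`.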
Some workers on MTurk are extremely active and take the majority of posted HITs. This can lead to many issues, some of which are outlined in our previous post. Although MTurk has over 100,000 workers who take surveys each year, and around 25,000 who take surveys each month, you are much more likely to recruit the highly active workers who take a majority of HITs: about 1,000 workers (1% of workers) take 21% of the HITs, and about 10,000 workers (10% of workers) take 74% of all HITs.
It is important to consider how many highly experienced workers there are on Mechanical Turk. As discussed in previous posts, the pool of active workers numbers in the thousands, but it is far from inexhaustible. A small group of workers takes a very large number of the HITs posted to MTurk; these workers are highly experienced and have seen the measures commonly used in the social and behavioral sciences. Research has shown that repeated exposure to the same measures can harm data collection, changing the way workers perform, creating treatment effects, giving participants insight into the purpose of some studies, and in some cases impacting the effect sizes of experimental manipulations. This issue is referred to as non-naivete (Chandler, 2014; Chandler, 2016).