By now, most people have heard of the gig economy and have some idea of how it works. In the gig economy, people perform short-term jobs or tasks to earn money. Gig economy jobs are considered independent or contract work, meaning people who work in the gig economy often trade the benefits and stability of traditional employment for the freedom and flexibility to decide when and how much they work. Some of the most easily identifiable gig economy platforms are Uber, Lyft, and Airbnb, along with the less frequently mentioned Amazon Mechanical Turk (MTurk).
A persistent cause of concern for researchers who conduct studies online is understanding what participants might be doing while completing their study. When participants are outside the lab, they cannot be observed and distracting aspects of the environment cannot be controlled by the research team. As a result, researchers are left to wonder: how much attention are participants giving my survey?
In this blog, we report on one small aspect of this issue by describing the work style adopted by workers on Amazon’s Mechanical Turk.
Last month, we published a blog titled, “Five Things you Should Not be Doing in Online Data Collection.” One of those things was launching your study without piloting it first. To reiterate how important we think this issue is, in this blog we describe how to easily conduct a pilot study using TurkPrime.
In this blog, we highlight some subtle and not-so-subtle aspects of the TurkPrime Dashboard you can use to make navigating the site and completing study-related tasks easier.
What is a Survey Group?
Survey Groups are one of the most powerful and dynamic tools on TurkPrime for controlling which workers are eligible or ineligible for your study. A Survey Group is exactly what it sounds like: a collection of surveys or studies you have grouped together. Survey Groups are useful when you want to ensure your studies have unique workers. This may be a set of studies investigating the same topic, or multiple studies being run in your lab at the same time for which you want no overlap in participants.
A new year represents the opportunity to consider priorities, set goals, chart new courses of action, and decide how to move forward in the coming months. At TurkPrime, we’re moving into 2019 looking for ways to expand the tools and services we offer to researchers. In addition to several initiatives we’re already working on, we want to hear from you about the tools and features that can make your research easier. To this end, we’re announcing the launch of an online Suggestion Box.
- Since early August, researchers have worried that “bots” are contaminating data collected on MTurk.
- We found that workers who submit HITs from suspicious geolocations are using server farms to hide their true location.
- When we used TurkPrime tools to block workers from server farms, we collected high-quality data from MTurk workers.
- We also collected data from workers who use server farms to learn more about them.
- Our evidence suggests recent data quality problems are tied to foreign workers, not bots.
In this blog, we review recent data quality issues on Mechanical Turk and report the results of a study we conducted to investigate the problem.
Data quality on online platforms
When researchers collect data online, it’s natural to be concerned about data quality. Participants aren’t in the lab, so researchers can’t see who is taking their survey, what those participants are doing while answering questions, or whether participants are who they say they are. Not knowing is unsettling.
Some workers on MTurk are extremely active and take the majority of posted HITs. This can lead to many issues, some of which are outlined in our previous post. Although MTurk has over 100,000 workers who take surveys each year, and around 25,000 who take surveys each month, you are much more likely to recruit the most active workers. The most active 1% of workers (about 1,000 people) complete 21% of all HITs, and the most active 10% (about 10,000 workers) complete 74%.
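To make the concentration described above concrete, here is a minimal sketch of how you might quantify it from a HIT completion log. The worker IDs and counts below are made up for illustration; only the calculation (share of HITs completed by the most active fraction of workers) reflects the idea in the text.

```python
from collections import Counter

# Hypothetical HIT log: each entry is the (made-up) ID of the worker
# who completed one HIT. A few workers account for most completions.
hit_log = (
    ["w1"] * 50 + ["w2"] * 30 + ["w3"] * 10 +
    ["w4"] * 5 + ["w5"] * 3 + ["w6"] * 1 + ["w7"] * 1
)

def top_share(log, fraction):
    """Share of all HITs completed by the most active `fraction` of workers."""
    counts = sorted(Counter(log).values(), reverse=True)
    n_top = max(1, round(len(counts) * fraction))
    return sum(counts[:n_top]) / len(log)

# In this toy log, the top 10% of workers (1 of 7) did 50 of 100 HITs.
print(top_share(hit_log, 0.10))  # 0.5
```

Run against a real log of worker IDs, the same function would reproduce figures like "10% of workers take 74% of HITs."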