- Amid the Bot Scare on MTurk in the summer of 2018, researchers reported that bad data often came from respondents linked to repeated geolocations.
- However, a deeper understanding of geolocations suggests there is little reason to believe that repeated geolocations are inherently tied to bad data quality.
- We describe the difference between repeated geolocations that come from server farms and those that do not, and we test the quality of data from the top 200 repeated geolocations not tied to server farms.
- Repeated geolocations not tied to server farms were a source of high-quality data, comparable to data obtained from non-repeated geolocations.
- Based on these results, we believe duplicate geolocations are not inherently problematic. Further, we are adjusting the default setting of our Block Duplicate Geolocations Feature to “OFF” and adding a pop-up to inform researchers about the consequences of using this tool.
Most social science research relies on convenience sampling of participants, meaning few samples look like, let alone represent, the general population. For many research questions, convenience samples are not a problem. Yet, for other questions, being able to capture and represent the opinions of people from different groups is essential. Because most researchers do not routinely gather these kinds of samples, knowing where to find one when it's needed can be difficult. Using TurkPrime, you can easily and affordably obtain a sample matched to the demographics of the US census on our market research platform, Prime Panels.
Sampling by Age: Why do it?
People of different ages vary greatly in their beliefs and behaviors. For example, a recent Pew report outlines wide generational gaps in people's opinions on several political issues, such as presidential job approval, perceptions of racism, views on immigration, and political ideology (Pew Research Center, 2018). Furthermore, some issues, like the use of Medicare, depend on age and are therefore more relevant to older adults than younger ones. For researchers who study such questions, not being able to recruit enough older participants may decrease the generalizability of their findings. While participant age receives less attention in social research than in some other fields, researchers who seek to recruit older adults online may find themselves hindered by the small number of older adults available. So, what can researchers do to recruit older adults in online social research?
At TurkPrime, we advocate for requesters to treat workers fairly when posting HITs on Amazon's Mechanical Turk (MTurk). Workers are, after all, the people who make the research possible. Sometimes, however, an MTurk worker is unable to receive payment despite having completed a survey. Below are two common scenarios in which this can happen:
Studying pairs of people (e.g., married couples, friends, coworkers) is becoming increasingly commonplace in the social and behavioral sciences. Online participant populations, such as Mechanical Turk and other online panels, can potentially serve as a rich source of dyadic participants. However, conducting dyadic research online also presents multiple challenges that must be overcome to obtain high-quality results. This blog post outlines some of the challenges of running dyadic studies online, the ways our MTurk Toolkit can best be used to run a dyadic study, and recommendations for best practices based on our experience. Using the methods outlined in this blog, researchers have successfully run numerous dyadic studies with the MTurk Toolkit.
- We collected high quality data on MTurk when using TurkPrime’s IP address and Geocode-restricting tools.
- Using a novel format for our anchoring manipulation, we found that Turkers are highly attentive, even under taxing conditions.
- After querying the TurkPrime database, we found that “farmer” activity has significantly decreased over the last month.
- When MTurk is used the right way, researchers can be confident they are collecting quality data.
- We are continuously monitoring and maintaining data quality on MTurk.
- Starting this month, we will be conducting monthly surveys of data quality on Mechanical Turk.
A case study from a recent JESP article
A new study appearing in the Journal of Experimental Social Psychology suggests Americans strongly believe in economic mobility because they fail to appreciate how vast wealth inequality really is. In this blog, we review the study and highlight how Prime Panels helped the author obtain a nationally stratified sample based on wealth, strengthening the study's findings and their generalizability.
By now, even casual users of MTurk have heard about recent concerns over “bots” and low-quality data. We've written about the topic here and laid out evidence suggesting that these “bots” are actually foreign workers using tools to obscure their true location (here). Perhaps most importantly, we've created two tools to help keep these workers out of your studies. In this blog, we introduce a third tool: the Universal Exclude List.
- Since early August, researchers have worried that “bots” are contaminating data collected on MTurk.
- We found that workers who submit HITs from suspicious geolocations are using server farms to hide their true location.
- When using TurkPrime tools to block workers from server farms, we collected high quality data from MTurk workers.
- We also collected data from workers who use server farms to learn more about them.
- Our evidence suggests recent data quality problems are tied to foreign workers, not bots.
In this blog, we review recent data quality issues on Mechanical Turk and report the results of a study we conducted to investigate the problem.