Most social science research relies on convenience samples of participants, meaning few samples look like, let alone represent, the general population. For many research questions, convenience samples are not a problem. For others, however, capturing and representing the opinions of people from different groups is essential. Because most researchers do not routinely gather these kinds of samples, knowing where to find one when it’s needed can be difficult. With TurkPrime, you can easily and affordably obtain a sample matched to the demographics of the US Census through our market research platform, Prime Panels.
Sampling by Age: Why do it?
People of different ages vary greatly in their beliefs and behaviors. For example, a recent Pew report outlines wide generational gaps in people’s opinions on several political issues, including presidential job approval, perceptions of racism, views on immigration, and political ideology (Pew Research Center, 2018). Furthermore, some issues, like the use of Medicare, depend on age and are therefore more relevant to older adults than younger ones. For researchers who study such questions, failing to recruit enough older participants may limit the generalizability of their findings. And while considering participants’ age is less common in social research than in other areas, researchers who seek to recruit older adults online may find themselves hindered by the small number of older adults available. So, what can researchers do to recruit older adults in online social research?
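To make the idea of a census-matched sample concrete, here is a minimal sketch of how age quotas might be allocated for a target sample size. The age brackets and proportions are hypothetical placeholders for illustration, not actual US Census figures, and this is not TurkPrime's implementation.

```python
# Illustrative sketch: turning population proportions into per-group
# participant quotas for a census-matched sample.
# NOTE: the brackets and shares below are made up for illustration.

def allocate_quotas(proportions, total_n):
    """Convert population proportions into participant quotas summing to total_n."""
    quotas = {group: round(share * total_n) for group, share in proportions.items()}
    # Rounding can leave the quotas slightly off target; adjust the
    # largest group so the quotas sum to total_n exactly.
    diff = total_n - sum(quotas.values())
    largest = max(quotas, key=quotas.get)
    quotas[largest] += diff
    return quotas

age_proportions = {  # hypothetical shares, not real Census data
    "18-29": 0.21,
    "30-44": 0.25,
    "45-64": 0.33,
    "65+": 0.21,
}

quotas = allocate_quotas(age_proportions, total_n=500)
```

In practice a panel provider fills each quota as participants arrive, closing a bracket once its target is met.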
At TurkPrime, we advocate for requesters to treat workers fairly when posting HITs on Amazon’s Mechanical Turk (MTurk). Workers are, after all, the people who make the research possible. Sometimes, however, situations arise in which an MTurk worker completes a survey but is unable to receive payment. Below are two common scenarios in which this can happen:
Studying pairs of people (e.g., married couples, friends, coworkers) is becoming increasingly common in the social and behavioral sciences. Online participant populations, such as Mechanical Turk and other online panels, can serve as a rich source of dyadic participants. Conducting dyadic research online, however, presents several challenges that must be overcome to obtain high-quality results. This blog post outlines those challenges, explains how our MTurk Toolkit can best be used to run a dyadic study, and offers recommendations for best practices based on our experience. Using the methods outlined here, researchers have successfully run numerous dyadic studies with the MTurk Toolkit.
- We collected high-quality data on MTurk when using TurkPrime’s IP address and Geocode-restricting tools.
- Using a novel format for our anchoring manipulation, we found that Turkers are highly attentive, even under taxing conditions.
- After querying the TurkPrime database, we found that “farmer” activity (workers submitting from server farms) has significantly decreased over the last month.
- When MTurk is used the right way, researchers can be confident they are collecting quality data.
- We are continuously monitoring and maintaining data quality on MTurk.
- Starting this month, we will be conducting monthly surveys of data quality on Mechanical Turk.
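One screen implied by the points above, which researchers can also apply themselves after collection, is checking for multiple submissions from the same IP address. The sketch below is a generic illustration, not TurkPrime's tooling; the field names are hypothetical and should be adapted to your survey export.

```python
# Minimal sketch: flag survey submissions that share an IP address.
# Field names ("worker_id", "ip") are hypothetical examples.
from collections import Counter

def flag_duplicate_ips(submissions):
    """Return the set of IP addresses that appear more than once."""
    counts = Counter(row["ip"] for row in submissions)
    return {ip for ip, n in counts.items() if n > 1}

data = [
    {"worker_id": "A1", "ip": "203.0.113.5"},
    {"worker_id": "A2", "ip": "198.51.100.7"},
    {"worker_id": "A3", "ip": "203.0.113.5"},  # same IP as A1
]

suspect_ips = flag_duplicate_ips(data)
```

Shared IPs are not proof of misconduct (e.g., household members may share a connection), so flagged cases warrant review rather than automatic exclusion.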
A case study from a recent JESP article
A new study appearing in the Journal of Experimental Social Psychology suggests Americans strongly believe in economic mobility because they fail to appreciate how vast wealth inequality really is. In this blog, we review the study and highlight how Prime Panels helped the author obtain a nationally stratified sample based on wealth, strengthening the study’s findings and generalizability.
By now, even casual users of MTurk have heard about recent concerns about “bots” and low-quality data. We’ve written about the topic here and laid out evidence suggesting that “bots” are actually foreign workers using tools to obscure their true location (here). Perhaps most importantly, we’ve created two tools to help keep these workers out of your studies. In this blog, we introduce a third tool: the Universal Exclude List.
- Since early August, researchers have worried that “bots” are contaminating data collected on MTurk.
- We found that workers who submit HITs from suspicious geolocations are using server farms to hide their true location.
- By using TurkPrime tools to block workers from server farms, we collected high-quality data from MTurk workers.
- We also collected data from workers who use server farms to learn more about them.
- Our evidence suggests recent data quality problems are tied to foreign workers, not bots.
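The geolocation screen described in the points above can be sketched in a few lines: drop submissions whose rounded latitude/longitude matches a set of flagged coordinates (e.g., locations shared by an implausible number of workers). This is a hedged, generic illustration; the coordinates, field names, and rounding precision are made up, and this is not TurkPrime's actual blocking mechanism.

```python
# Hypothetical sketch: screen out submissions from flagged geolocations.
# The flagged coordinates below are illustrative placeholders.
SUSPICIOUS_COORDS = {(40.71, -74.01)}  # hypothetical server-farm location

def is_suspicious(lat, lon, blocked=SUSPICIOUS_COORDS, precision=2):
    """Check a submission's rounded coordinates against the flagged set."""
    return (round(lat, precision), round(lon, precision)) in blocked

submissions = [
    {"worker_id": "B1", "lat": 40.7128, "lon": -74.0060},  # matches flagged coords
    {"worker_id": "B2", "lat": 34.0522, "lon": -118.2437},
]

clean = [s for s in submissions if not is_suspicious(s["lat"], s["lon"])]
```

Rounding to two decimal places groups submissions within roughly a kilometer of each other, which is one simple way to detect many workers reporting from the same point.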
In this blog, we review recent data quality issues on Mechanical Turk and report the results of a study we conducted to investigate the problem.
Last week, the research community was gripped by concern that “bots” were contaminating data collection on Amazon’s Mechanical Turk (MTurk). We wrote about the issue and conducted our own preliminary investigation into the problem using the TurkPrime database. In this blog, we introduce two new tools TurkPrime is launching to help researchers combat suspicious activity on MTurk and reiterate some of the important takeaways from this conversation so far.
Data quality on online platforms
When researchers collect data online, it’s natural to be concerned about data quality. Participants aren’t in the lab, so researchers can’t see who is taking their survey, what those participants are doing while answering questions, or whether participants are who they say they are. Not knowing is unsettling.