Academic research is a collaborative endeavor. Faculty members work with post-docs, grad students, and undergrads. Sometimes one lab collaborates with another. During the course of such work, resources sometimes need to be shared or redistributed. At TurkPrime, we have sought to make part of this sharing easier by allowing researchers to transfer funds from one user’s lab balance to another. In this blog, we demonstrate how to use this feature.
One reason Amazon Mechanical Turk has become so popular among researchers is the speed with which data can be collected. Compared to more traditional research methods—lab-based experiments, field studies, ethnographic interviews, and the like—MTurk is exceptionally fast, making it possible to collect data for an entire study within a day or sometimes just a few hours. Although MTurk’s speed is a clear advantage, there are times when collecting data all at once can actually be a problem. In this blog, we explain how to spread your data collection out across time and why you might want to do so.
In this blog, we highlight some subtle and not-so-subtle aspects of the TurkPrime Dashboard you can use to make navigating the site and completing study-related tasks easier.
Three weeks ago, we published a blog explaining five things you should be doing in your online data collection. In this blog, we follow up with five things you should NOT be doing when collecting data on MTurk.
What is a Survey Group?
Survey Groups are one of the most powerful and dynamic tools on TurkPrime for controlling which workers are eligible and ineligible for your study. A Survey Group is exactly what it sounds like: a collection of surveys or studies you have grouped together. Survey Groups are useful when you want to ensure your studies have unique workers. This may be a set of studies investigating the same topic or multiple studies being run in your lab at the same time for which you want no overlap in participants.
A new year represents the opportunity to consider priorities, set goals, chart new courses of action, and decide how to move forward in the coming months. At TurkPrime, we’re moving into 2019 looking for ways to expand the tools and services we offer to researchers. In addition to several initiatives we’re already working on, we want to hear from you about the tools and features that can make your research easier. To this end, we’re announcing the launch of an online Suggestion Box.
Researchers are expected to be experts, or at least knowledgeable, in several areas. There’s the topic of your research, the methods common within your discipline, best practices for open science, and the media used to communicate about your work—just to name a few. For many researchers, online data collection has been revolutionary, helping them collect data faster and more affordably than ever before. Yet, with the emergence of online research, there is now one more domain to master. Given the steep learning curve for running online studies well, we put together this blog to highlight five practices that, if you’re not already following in your online research, you should be. These practices primarily apply to online research on Amazon’s Mechanical Turk when using TurkPrime’s MTurk Toolkit, but some can be applied to other platforms as well.
In this blog, we explain everything you could ever want to know about including and excluding participants from studies while using TurkPrime. In last week’s blog on longitudinal studies, we described our Include Workers feature, but this blog digs into the nitty-gritty and explains what our features are, when you might want to use them, and how they work.
In this blog, we describe how to run a longitudinal study on MTurk, using TurkPrime. We also provide tips for maximizing worker participation and minimizing attrition.
- Amid the Bot Scare on MTurk in the summer of 2018, researchers reported that bad data often came from respondents linked to repeated geolocations.
- However, a deeper understanding of geolocations suggests there is little reason to believe that repeated geolocations are inherently tied to bad data quality.
- We describe the difference between repeated geolocations that come from server farms and those that do not, and we test the quality of data from the top 200 repeated geolocations not tied to server farms.
- Repeated geolocations that are not tied to server farms were a source of high-quality data, comparable to data obtained from non-repeated geolocations.
- Based on these results, we believe repeated geolocations are not inherently problematic. Accordingly, we are changing the default setting of our Block Duplicate Geolocations feature to “OFF” and adding a pop-up to inform researchers about the consequences of using this tool.