ORIGINAL ARTICLE

Crowdsourcing participants for psychological research in Australia: A test of Microworkers

Pages 39-47 | Received 29 Apr 2015, Accepted 19 Nov 2015, Published online: 20 Nov 2020

Abstract

Objective

Australian researchers interested in studying psychological phenomena using Australian samples have few reliable sampling options, often being restricted to undergraduate participant pools and convenience samples subject to well‐known limitations. To expand the range of sampling options available, we attempted to validate the crowdsourcing platform Microworkers as a viable tool for collecting data from Australian participants.

Method

Across two studies, 122 Australian participants were recruited via Microworkers to complete a demographic survey (Studies 1 and 2), personality questionnaire (Study 2), and a standard decision‐making task designed to elicit a framing effect (Study 2).

Results

We successfully acquired our desired sample sizes, providing a first indication of the viability of Microworkers as a platform for the recruitment of Australian participants by Australian researchers. Moreover, the recruited Microworkers samples were demographically diverse (in a similar fashion to Internet samples in general) and produced valid psychological data.

Conclusion

Overall, these results provide promising preliminary evidence for Microworkers as a viable platform for the recruitment of Australian participants for psychological research, and for Australian researchers interested in crowdsourced participants more generally.

Key points

  • Outside of undergraduate and convenience samples, Australian researchers and researchers interested in studying Australian populations have long faced a shortage of well‐validated recruitment platforms.

  • One of the most popular crowdsourcing platforms for the recruitment of research participants, Amazon.com's Mechanical Turk, is not a viable option for Australian researchers or for recruitment of Australian participants.

  • The present research is the first to test the viability of the Microworkers platform as a solution to this lacuna.

  • Two studies were successful in recruiting the target sample sizes via Microworkers.

  • The recruited Microworkers samples were demographically diverse, in many ways paralleled Australian census figures, and produced valid psychological data.

  • These findings suggest that Microworkers may offer a promising alternative for researchers wishing to recruit Australian participants, and Australian researchers wishing to recruit crowdsourced participants more generally.

Sampling is a critical issue faced by researchers studying human subjects and is arguably the most important determinant of the cost, reliability, and generalisability of research findings. In the last two decades, psychology has witnessed two monumental and closely related developments in this facet of the research process, promising unprecedented improvements in the speed, scale, affordability, and diversity of the samples researchers can recruit. These two developments are, first, the use of the Internet in general and, second, the use of crowdsourcing services in particular (Gosling & Mason, Citation2015). In this article, we briefly trace the history of these developments, with an emphasis on their current limitations. We then present a preliminary demonstration of a relatively under‐utilised crowdsourcing platform called Microworkers (see Hirth, Hoßfeld, & Tran‐Gia, Citation2011), highlighting the way in which it may overcome the limitations of existing services. In particular, our findings point to Microworkers as a valuable tool for crowdsourced participant recruitment in Australia.

A (very) brief history of Internet and crowdsourced research

Since the experimental revolution in the mid‐twentieth century, the history of psychology has been dominated by the use of undergraduate research participant pools as its primary data source (Sears, Citation1986). While the availability of such a large, affordable pool has permitted innumerable advances in psychology (and other fields studying human behaviour; Kam, Wilking, & Zechmeister, Citation2007), it has been rightly criticised for its lack of diversity. Put simply, the lack of demographic and cultural diversity arising from reliance on Western undergraduate participant pools will, in most (but not all) circumstances, lead to a severely partial understanding of human psychology (Arnett, Citation2008; Henrich, Heine, & Norenzayan, Citation2010; Sears, Citation1986).

Although Internet sampling has been employed since the late 1990s (see Gosling & Mason, Citation2015 for a review), it has only recently begun to challenge the convenience and efficiency of undergraduate participant pools. Internet sampling was at first relatively rare: an early review by Skitka and Sargis (Citation2006) found that only 1.6% of all articles published in APA journals in 2003 and 2004 used Internet samples, including a mere four articles in the Journal of Personality and Social Psychology. Now, however, Internet studies have become commonplace: in the last twelve months (November 2014–October 2015), every single issue of JPSP has included at least one article using Internet samples.

What caused the proliferation of Internet‐based studies? Originally the subject of great scepticism, web‐based studies have now been convincingly shown to be comparable to equivalent lab studies in many domains (Germine et al., Citation2012; Gosling, Vazire, Srivastava, & John, Citation2004; McGraw, Tew, & Williams, Citation2000). Furthermore, software and hardware advances are permitting increasingly complex web‐based studies, including collecting precise reaction times or facilitating live participant interaction (e.g., Garaizar, Vadillo, & López‐de‐Ipiña, Citation2014).

Beyond mere comparability with conventional methods, however, Internet sampling possesses unique advantages. Most obvious is the potential efficiency and scale of Internet studies: unlike in‐person testing, Internet studies circumvent the constraints of limited testing space and time, and (relatively) small participant pools. Without these limitations, one can conceivably test thousands of people with minimal investment of time and money (e.g., Nosek, Hawkins, & Frazier, Citation2011; Xu, Nosek, & Greenwald, Citation2014). Additionally, compared with undergraduate samples, Internet samples are more demographically diverse (Gosling et al., Citation2004). Moreover, while such samples may not be substantially more diverse than undergraduate samples on some attributes (e.g., ethnicity), the ability to recruit larger samples means that relatively underrepresented groups (e.g., males over 50 years old belonging to an ethnic minority) can, in absolute terms, be recruited in the numbers required for statistical analyses.

Although the Internet does facilitate rapid, affordable recruitment of large samples, delivering a study via the Internet does not in and of itself guarantee this. Indeed, many of the most successful Internet‐based projects owe much of their success to the presence of incentives or publicity. The use of the Internet simply enabled this success. Studies such as Project Implicit (Nosek et al., Citation2011), for example, offered participants feedback on their beliefs and attitudes, fulfilling participants' epistemic motivations. Moreover, because of its initial popularity, it further benefitted from receiving news coverage (e.g., Cox, Citation2009). However, most Internet‐based studies do not receive such attention.

It is in this context that the use of crowdsourcing services, especially Amazon's Mechanical Turk (AMT), has proliferated. Given the wide range of existing research describing the various characteristics of AMT (e.g., Buhrmester, Kwang, & Gosling, Citation2011; Casler, Bickel, & Hackett, Citation2013; Mason & Suri, Citation2012; Paolacci & Chandler, Citation2014), we keep our own description brief. Essentially, AMT is a platform via which a large pool of workers performs simple tasks for employers in exchange for small amounts of financial compensation. Initially, AMT was intended as a service to allow employers to efficiently perform tasks such as tagging the content of images: tasks that cannot easily be performed by artificial intelligence. Recently, however, AMT has come to the attention of researchers as an inexpensive source of research participants.

Limitations of AMT

Crowdsourcing with AMT has, to an extent, democratised the research process by providing access to a large participant pool for researchers facing geographical or financial challenges that otherwise make conventional data collection methods unviable (Johnson & Borden, Citation2012). There are, however, limitations on who has access to AMT both as a worker (participant) and requester (researcher). We address these in turn.

While AMT workers reportedly reside all over the world, workers are overwhelmingly concentrated in two countries: the USA and India. Indeed, the only validation research reported to date for specific populations is based on US and Indian AMT users (Litman, Robinson, & Rosenzweig, Citation2014). One systemic reason for this over‐representation is a stipulation in the terms of service for AMT users that current workers can only be compensated via a US or Indian bank account, or via the Amazon store. Moreover, non‐US residents have recently been restricted from creating worker accounts, limiting the non‐US AMT worker population to pre‐existing users.Footnote1

Much like workers, requesters also face barriers to accessing AMT, as requester accounts are currently restricted to users with US billing addresses and US social security or employer identification numbers. This has effectively limited the use of AMT to requesters who either are themselves from the US, or who have access to an account held in the name of someone from the US.

To summarise, AMT is an incredibly powerful research tool for those who have access to it. However, it is quite limited both in terms of which researchers can use it, and from which countries participants can be recruited.

Microworkers as an alternative platform to AMT

While AMT is by far the most widely recognised and frequently used crowdsourcing platform for researchers, other platforms do exist. Among these is Microworkers, which shares many features with AMT, such as the capacity to recruit workers for essentially any task that can be completed online and the ability to limit the availability of tasks to residents of particular countries. Further, preliminary research suggests that Microworkers is not subject to some of the limitations outlined above for AMT (Hirth et al., Citation2011). Most notably for our purposes, Microworkers places no geographical restrictions upon workers or employers, and consequently has far greater diversity in the location of both workers and employers. Thus, Microworkers could serve as a crowdsourcing platform through which one could recruit sufficiently large samples from non‐US countries. It is this possibility that we explored in the present studies.

It is our belief that identifying a crowdsourcing platform that can be utilised by Australian researchers to recruit international samples, or by international (and hence Australian) researchers to recruit Australian samples, carries great value. The importance of the former is obvious, given the lack of viable recruitment options for Australian researchers. One reason for the importance of the latter is that, despite sharing many cultural and societal features with other countries, Australia has features that give rise to specific research questions (e.g., concerning multiculturalism; Feather, Citation2005; Fraser & Islam, Citation2000; Sibley & Barlow, Citation2009; Spry & Hornsey, Citation2007). An essential requirement for pursuing these regionally specific interests is the existence of infrastructure for recruiting Australian participants.

Overview of the present studies

Below, we report data from three separate samples collected across two studies. Sample 1 was collected for Study 1, an initial proof‐of‐concept study in which we attempted to establish the viability of recruiting Australians from Microworkers by a team of Australian researchers. We next recruited Sample 2 for Study 2, in which we explored Australian Microworkers users' motivations for using the service, as well as whether they would produce data consistent with commonly observed effects in the psychological literature. Although largely successful, the results from Sample 2 warranted additional data collection (discussed below); hence we recruited Sample 3 for inclusion in Study 2.

Method

Participants

For both studies (and all three samples), we recruited participants via Microworkers.com by launching a single ‘campaign’ (analogous to a ‘HIT’ on AMT), requesting a specific number of ‘positions’ (analogous to ‘assignments’), each to be filled by one participant. Participants were compensated US$1.50 for completion of the survey, paid into their Microworkers accounts. More information on the structure of the Microworkers platform appears in the Supporting Information (including Tables S1 and S2). Specific characteristics of each sample are detailed below. This research was approved by the Human Research Ethics Advisory Panel at University of New South Wales (UNSW) Australia.

Procedure

Study 1 (sample 1)

Upon accepting a position, participants were redirected from Microworkers to an external survey programmed in and hosted by Qualtrics. Participants reported their gender, highest level of education, age, ancestry,Footnote2 religious affiliation, employment status, marital status, current household income, country of birth, and country of residence. If participants reported living in Australia, they indicated which state or territory. Participants born outside Australia reported which year they arrived in Australia. Participants indicated their citizenship or visa status in Australia, whether they were enrolled to vote in Australia, and which (if any) political party they affiliated with. Participants reported whether they had any children (and if so, how many) and what region they lived in (urban, suburban, or rural). Participants rated their political orientation on social and economic issues on separate 7‐point scales ranging from extreme left (liberal/progressive) to extreme right (conservative).

Participants in Study 1 also provided information on their history as Microworkers users, including estimates of how long they had been using the site and how much money they typically earned on Microworkers per fortnight. Participants estimated how successful their Microworkers submissions had been and how many of their completed Microworkers jobs had been related to academic research.

Finally, to reach the end of the study and receive a completion code that allowed compensation on Microworkers, all participants had to successfully complete a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart). This served to ensure that the survey was not being completed by an automated system, or ‘bot’.

Study 2 (samples 2 and 3)

Participants completed a procedure identical to Study 1, in addition to reporting the extent to which several statements described their reasons for completing jobs on Microworkers (reported in detail below).Footnote3 Items were adapted from Buhrmester et al. (Citation2011). Participants also completed the 44‐item Big Five Inventory (BFI; John & Srivastava, Citation1999), which assesses the five personality facets of openness, conscientiousness, extraversion, agreeableness, and neuroticism.

Participants also completed the Asian Disease Problem (Tversky & Kahneman, Citation1981) adapted to an Australian context, for comparison with previous AMT validation work (e.g., Paolacci, Chandler, & Ipeirotis, Citation2010). Participants were asked to imagine that Australia was facing the impending outbreak of an unusual disease that was expected to kill 600 people. Participants would be choosing between two programs designed to combat the disease. One program represented a riskier option, and the other, a certain option. Participants were randomly allocated to one of two conditions: a gain frame condition in which participants chose between options phrased in terms of lives saved (i.e., 200 saved vs a 1/3 chance of saving all), and a loss frame condition in which participants chose between options phrased in terms of lives lost (i.e., 400 lost vs a 2/3 chance of losing all). Importantly, both options had identical expected pay‐offs of 400 lives lost and 200 saved. Typically, respondents are more willing to pick the riskier option when the options are framed negatively (lives lost) than when they are framed positively (lives saved; Tversky & Kahneman, Citation1981).
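To make the equivalence of the options explicit, the expected outcomes described above can be written out as a short worked calculation (our illustration, following the pay‐offs stated in the task):

\[
E[\text{risky, gain frame}] = \tfrac{1}{3}(600\ \text{saved}) + \tfrac{2}{3}(0\ \text{saved}) = 200\ \text{saved}
\]
\[
E[\text{risky, loss frame}] = \tfrac{1}{3}(0\ \text{lost}) + \tfrac{2}{3}(600\ \text{lost}) = 400\ \text{lost}
\]

In each frame, the risky option therefore matches the certain option (200 saved, 400 lost) in expectation; only the framing differs.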

Results

In the sections below, we first provide an overview of the data collection process that may be useful for considering the logistics of data collection with Microworkers. Second, we provide a summary of the demographic characteristics of our samples, providing illustrative comparisons to population data. Third, we present data on participants' Microworkers usage history (Studies 1 and 2) and motivations for using Microworkers (Study 2). Finally, we present psychological data (both questionnaire and experimental) to demonstrate that Microworkers can be used to collect valid psychological data (Study 2).

Data collection characteristics

As a first indication of data quality, all participants successfully completed the CAPTCHA, and had unique IP addresses and user IDs, suggesting that no participants completed the study more than once. Further, all but two participants had IP addresses located in Australia, suggesting that Microworkers' geographic eligibility restrictions perform very well.
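As a rough illustration of how such screening might be scripted (this is our sketch, not the authors' procedure, and the column names are hypothetical), duplicate submissions could be flagged as follows:

```python
import pandas as pd

# Hypothetical response records; column names are illustrative, not taken from the study.
responses = pd.DataFrame({
    "worker_id": ["w01", "w02", "w03"],
    "ip_address": ["1.2.3.4", "5.6.7.8", "9.10.11.12"],
    "captcha_passed": [True, True, True],
})

# Flag repeated worker IDs or IP addresses, either of which would suggest duplicate submissions.
duplicate_ids = responses[responses.duplicated("worker_id", keep=False)]
duplicate_ips = responses[responses.duplicated("ip_address", keep=False)]

print(f"Duplicate worker IDs: {len(duplicate_ids)}")
print(f"Duplicate IP addresses: {len(duplicate_ips)}")
print(f"CAPTCHA pass rate: {responses['captcha_passed'].mean():.0%}")
```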

A summary of data collection characteristics for each sample is presented in Table 1. A few results warrant discussion. Comparing the target sample size to the number of people who actually completed each study, there is small but obvious variation from sample to sample. As with AMT, Microworkers users submit a researcher‐generated completion code in order to receive compensation. Thus, participants will occasionally submit a completion code without having actually completed the study (resulting in a smaller actual sample), or complete the study without submitting the completion code for compensation (resulting in a larger actual sample).

Table 1. Summary of data collection characteristics by sample

Another result warranting attention is the presence of participants under 18 years of age. We did not expect to recruit anyone under 18, as the Microworkers Terms of Service stipulate that members must be at least 18 years old. The relatively large number of minors in Sample 2 was one of the motivating factors behind recruiting Sample 3. That is, we sought to obtain a more precise estimate of the prevalence of minors. Based on the three samples, it appears that approximately 9% of Microworkers users are minors (though given the variability from sample to sample, more data would be desirable in order to obtain a more stable estimate). In the results reported below, we excluded data from participants under the age of 18 given that, for ethical reasons, researchers would typically not deliberately recruit minors in an unsupervised, online environment.

Another important aspect of Table 1 is the duration of data collection. While AMT is known for rapid data collection from large numbers of participants, our Microworkers campaigns attracted, on average, two participants per day, with the data collection rate remaining roughly constant across all three sampling time windows (see Figure S1 for details). Given that the two websites claim to have total workforces of similar size (as of October 2015, 700,000 for Microworkers vs 500,000 for AMT), we believe the relatively slower completion time to be largely a result of Australia's smaller population relative to that of the USA.

Demographic characteristics

For brevity, we report demographic characteristics aggregated across samples for participants at least 18 years of age (N = 111). Disaggregated data are available upon request. Participants were aged between 18 and 62 (M = 28.14, SD = 9.23). Regarding political orientation, participants were on average slightly left leaning for social and economic political conservatism, but showed a considerable degree of variability (social: M = 3.59, SD = 1.51; economic: M = 3.65, SD = 1.45).

Percentages by category for other demographic variables appear in Table 2, residence and ethnicity characteristics in Table 3, and religious and political characteristics in Table 4. As can be seen in these tables, our samples exhibited substantial demographic diversity. In some respects, the sample compares favourably with recent census figures (Australian Bureau of Statistics, Citation2013), which report a female population of 50.6%; state of residence percentages of 32.5%, 25.1%, and 20.3% for New South Wales, Victoria, and Queensland, respectively; and 69.8% of the population born in Australia. In other respects, our sample was somewhat unrepresentative: census figures indicate, for example, that 14.3% of the Australian population holds a university degree and that 59.7% of the population is engaged in full‐time work. Comparison between these figures and those from our samples reveals that Australian Microworkers samples may over‐represent individuals with tertiary degrees and under‐represent individuals engaged in full‐time work. Overall, however, our data do show a similar pattern to that of Internet studies in general (Gosling & Mason, Citation2015): while not entirely representative of the population, they are nonetheless more diverse than commonly used undergraduate and convenience samples.

Table 2. Percentages for basic demographic variables

Table 3. Percentages for residence and ethnicity variables

Table 4. Percentages for religious and political variables

User history and motivations

As can be seen in Table 5, participants were largely motivated by external financial incentives; however, few saw Microworkers as their primary source of income (for this item, no one selected the maximum response, and only three selected above the midpoint). Non‐financial, internal motivations, such as killing time, were also rated as highly motivating.

Table 5. Descriptive statistics for Microworkers user motivations from Study 2

With regard to Microworkers usage history, participants were relatively new and infrequent users, reporting having been members for between 0 and 4 years (Median = 0.10) and having previously completed between 0 and 2,500 jobs (Median = 4). Those who reported previously completing at least one Microworkers job (N = 95) reported high approval rates (M = 81%, SD = 34%, Median = 100%). The sample reported an estimated weekly Microworkers income (in Australian dollars) of between $0 and $200 (M = $10.06, SD = $27.52, Median = $2.00).

Interestingly, participants reported that only a small percentage of their previous work had involved academic studies (42% reported having never completed academic research on Microworkers; M = 21%, SD = 34%, Median = 2%). In stark contrast, recent research on AMT suggests that academic research constitutes a large proportion of the jobs completed on AMT, with some users effectively working as professional research participants (Chandler, Mueller, & Paolacci, Citation2014). This suggests that (for now at least) participant non‐naïveté may be a less pressing concern on Microworkers than on AMT (see Chandler et al., Citation2014 for a discussion).

Psychological characteristics

Means, standard deviations, and subscale reliabilities for the Big Five facets appear in Table 6. Although comparable Australian datasets using the BFI are scarce, the distribution of scores closely matched those of an Australian undergraduate sample (Smillie & Jackson, Citation2006), also presented in Table 6 for comparison. The largest mean difference in facet scores was observed for extraversion, with the Microworkers sample reporting lower extraversion than Smillie and Jackson's student sample. This finding mirrors previous research showing AMT samples to be less extraverted than undergraduate student samples (Goodman, Cryder, & Cheema, Citation2013). With regard to measurement reliability, the Microworkers sample produced lower alphas for all BFI subscales compared with those observed by Smillie and Jackson (Citation2006), except for agreeableness. As can be seen in Table 6, however, even if lower than typically desired, alphas in our sample were still within the range commonly observed with brief, broadband personality scales (Murray et al., Citation2009).

Table 6. Big Five facet descriptive statistics and reliabilities from Study 2 and Smillie and Jackson (Citation2006)
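As a point of reference for readers less familiar with these reliability estimates, the snippet below sketches the standard Cronbach's alpha computation used for internal consistency; the function and the item scores are purely illustrative and are not the authors' analysis code or data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a participants-by-items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                               # number of items in the subscale
    item_variances = items.var(axis=0, ddof=1)       # variance of each item across participants
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 1-5 ratings from five participants on four items of one BFI subscale.
example_scores = np.array([
    [4, 5, 4, 3],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])
print(f"alpha = {cronbach_alpha(example_scores):.2f}")
```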

Table 7 reports the pattern of risky choices observed in Study 2 for the Asian Disease Problem. For comparison, we also present corresponding findings from two previous AMT studies that deployed this task (Berinsky, Huber, & Lenz, Citation2012; Paolacci et al., Citation2010). While the framing manipulation in our study produced only a marginally significant effect (χ2(1) = 3.12, p = .077, Φ = .19), it is directionally consistent with existing studies, and is within the range observed in the Many Labs Replication Project (Klein et al., Citation2014).

Table 7. Risky choice proportions for the Asian Disease Problem for Study 2 and comparable studies
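For illustration, a framing analysis of this kind can be run as a chi‐square test of independence on a 2 × 2 table of frame by choice. The sketch below uses hypothetical cell counts, not the Study 2 data reported in Table 7.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = frame (gain, loss), columns = choice (certain, risky).
# These numbers are illustrative only, not the counts reported in Table 7.
table = np.array([
    [30, 15],   # gain frame: 15 of 45 chose the risky option
    [20, 25],   # loss frame: 25 of 45 chose the risky option
])

chi2, p, dof, expected = chi2_contingency(table, correction=False)
phi = np.sqrt(chi2 / table.sum())  # effect size (phi) for a 2x2 table

print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}, phi = {phi:.2f}")
```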

Additional analyses of data quality indicators are presented in the Supporting Information.

Discussion

The aim of this research was to examine the viability of Microworkers as a crowdsourcing platform for the recruitment of Australian samples and for use by Australian researchers. To our knowledge, we recruited the first‐ever fully Australian samples from a crowdsourcing platform. Across two studies and three samples, we observed promising results regarding the data quality, demographic diversity, and psychological characteristics of Australian Microworkers users.

As with online sampling methods validated in other countries, our Australian Microworkers participants were more demographically diverse than undergraduate and convenience samples, but, unsurprisingly, less diverse than the population at large. Participants reported a mix of internal and external motives for using Microworkers, were relatively new to the platform, and had completed few academic studies. Participants displayed personality profiles mirroring previous findings in AMT samples (i.e., with crowdsourced participants scoring lower on extraversion than non‐Internet samples; Goodman et al., Citation2013). Responses to the Asian Disease Problem also revealed patterns qualitatively similar to, albeit weaker than, those observed in AMT samples and the general population.

Our results provide an encouraging indication that data collected with Microworkers are of comparable quality to those collected via other sources, including AMT, which is currently limited in its accessibility for Australian participants and researchers alike. However, there is at least one way in which Microworkers falls short of AMT: the speed of data collection. While large American AMT samples can be collected in the course of hours (Mason & Suri, Citation2012), in our experience, recruiting Australian samples via Microworkers was considerably slower. In the face of a lacuna of crowdsourcing options for researchers seeking Australian samples, however, we feel that this slower rate is a relatively small drawback.

One might wonder about the size of the Australian participant pool on Microworkers. We are unable to address this point conclusively; however, we can point to the fact that there was no decline in data collection rate between Sample 1 and Sample 3, which were both the same size and took the same amount of time to collect. Moreover, within each sample, the data collection rate remained constant. This suggests that we were not pushing up against the limits of the Australian user population on Microworkers. Even if only 0.33% of the 700,000 total registered Microworkers users were Australian residents, reflecting Australia's proportion of the world population, this would still produce a pool of 2,310 workers (see Table S3 for further discussion). Such a figure would rival Australian undergraduate participant pools, and would be of the same order of magnitude as the AMT participant pool sampled by the average researcher (Stewart et al., Citation2015). Future research recruiting larger sample sizes over a longer period of time will further clarify this question.
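Spelling out that back‐of‐the‐envelope estimate (the 0.33% figure approximates Australia's roughly 24 million residents out of a world population of roughly 7.3 billion; these population figures are our approximations rather than values reported above):

\[
0.33\% \times 700{,}000 = 0.0033 \times 700{,}000 = 2{,}310\ \text{workers.}
\]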

One important pragmatic question not addressed by the present findings concerns compensation rate. Across our three samples, factors such as payment size and task duration (and thus compensation rate) were essentially constant. As such, we cannot offer comparisons with AMT regarding the minimum cost of sampling or the effect of compensation rate on data collection times and data quality (e.g., Buhrmester et al., Citation2011). This is an obvious direction for future research.

One must also consider the ethics of crowdsourcing. Much of the work completed on these platforms is compensated below minimum wage (Fort, Adda, & Cohen, Citation2011).Footnote4 On platforms such as Microworkers and AMT, workers are often vulnerable to exploitation in that employers may choose to reject properly completed work without compensation (although concerns about the prevalence of such practices may be overstated; Horton, Citation2011). Of course, too high a rate might tip in the other direction, towards incentives that are disproportionately alluring and might disrupt the intended voluntary nature of informed consent. As the use of crowdsourcing platforms becomes more widespread, researchers, universities, and funding bodies alike must engage with the ethical considerations surrounding compensation rates.

Our findings also have implications for researchers interested in conducting cross‐cultural research. Hirth et al. (Citation2011) accessed data directly from Microworkers (rather than via a campaign) and reported the ten most represented worker countries: Indonesia, Bangladesh, India, USA, Philippines, Romania, Egypt, Nepal, Pakistan, and Poland. Our findings suggest that Australia is also represented in the Microworkers user database. This geographical diversity suggests that Microworkers may be a valuable sampling tool for tackling the critical challenge of broadening the empirical base of psychological research (Arnett, Citation2008; Henrich et al., Citation2010; Sears, Citation1986). Further, access by researchers around the world broadens the scope of international teams of researchers that can collaborate in such efforts.

In conclusion, these findings represent a promising demonstration of Microworkers as an online crowdsourcing platform for collecting diverse and psychologically valid data. Outside of undergraduate and convenience samples, Australian researchers (particularly those interested in studying Australian populations) have long faced a shortage of well‐validated recruitment platforms. Unfortunately, due to limitations on Amazon Mechanical Turk worker and employer account creation, that most popular resource is not a viable recruitment option. Microworkers, however, is easily accessible to both Australian researchers and Australian participants. Our experience suggests that Microworkers may offer a promising alternative for researchers wishing to recruit Australian participants, and Australian researchers wishing to recruit crowdsourced participants more generally. It is our hope that this proof‐of‐concept research inspires researchers to make use of this untapped resource.


Notes

1. In mid‐2014, we attempted to run the study reported in this manuscript with an AMT sample limited to Australian residents. We terminated data collection after only one participant completed the study over a period of seven days.

2. For brevity, breakdown of country‐based ancestry is not reported. These data are available upon request.

3. We attempted to collect user motivation data in Study 1; however, due to a programming error, participants were unable to submit responses to these questions. This technical issue likely contributed to the relatively higher attrition rate and larger upper bound on the interquartile range of completion time for Study 1 relative to Study 2, in which this issue was fixed.

4. Across our studies, the compensation rate was around US$10 per hour, which is slightly below the Australian minimum wage but considerably greater than rates usually paid to AMT workers, and is comparable to rates offered to research participants taking part in laboratory‐based psychology studies at UNSW.

References
