Applications of Assessment

Learning From Mistakes: Impact of Careless Responses on Counseling Research Using Amazon’s Mechanical Turk
