Commentary

Tackling deceptive responding during eligibility via content-knowledge questionnaires

Pages 141-142 | Received 04 Jan 2020, Accepted 04 Jan 2020, Published online: 21 Jan 2020

Deceptive responding from participants has troubled social research for years. The problem becomes particularly conspicuous when the topic under analysis involves relatively private issues, such as sexual behavior or drug use, or when there are incentives, monetary or otherwise, to participate. Illustrating this problem, an intriguing study surveyed 100 participants who had regularly served as volunteers in clinical trials and found that 25% to 33% had either exaggerated or fabricated symptoms to facilitate their participation and, perhaps more worrisome, that 75% had concealed information likely to result in their exclusion from the study (1). A recent review estimates that the overall deception rate among healthy volunteers ranges from 3% to 25% (2). Almost every piece of literature on deceptive responding indicates the need to improve screening and eligibility techniques to avoid misrepresentation of one's own behavior (3,4). Clinical studies on drug use can employ objective verification (5), yet toxicological measurements can be expensive and are sometimes simply not feasible; studies conducted online are an example of the latter (6).

The paper by Strickland and Stoops, "Utilizing content-knowledge questionnaires to assess study eligibility and detect deceptive responding," in this issue of The American Journal of Drug and Alcohol Abuse (7) is particularly relevant to addressing deceptive responding. The paper puts forward a procedure to improve eligibility assessment when using crowdsourced sampling. This type of sampling draws on online data collection services such as Amazon Mechanical Turk (MTurk), StudyResponse, or Qualtrics Panels: researchers upload custom-made research forms to these services and recruit participants from a large and, in principle, diverse population (8,9).

Strickland and Stoops describe a novel method for assessing enrollment criteria that is particularly useful for online research or situations in which objective measurement of drug use is not possible. Specifically, the researchers recruited approximately 4000 volunteers via MTurk and asked participants to complete, in the context of a larger study on decision-making, a screening survey that included self-reported cannabis use and a newly designed Cannabis Knowledge Questionnaire (CKQ). Typically, researchers use only the drug use self-report to decide whether a given participant is eligible. The CKQ, which represents the major methodological innovation of the study, included a battery of questions on cannabis jargon, paraphernalia associated with its use, and the costs and weights of cannabis products, among other questions related to cannabis use culture. The authors then correlated CKQ responses with each individual's self-reported cannabis use history. A major strength of the study was the assessment, in a separate sample of participants, of the correspondence between CKQ scores and a urine toxicology screen for cannabis use.
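To make the logic concrete, the sketch below illustrates a two-step eligibility check of the kind this design implies: self-report first, then corroboration by a content-knowledge score. It is our illustration, not the authors' implementation; the field names and the cutoff value are hypothetical (the actual items and empirically derived cutoffs are in the paper's supplementary material).

```python
# A minimal sketch, assuming a dict-like screening record; the field names
# and CKQ_CUTOFF are hypothetical, not taken from Strickland and Stoops.
CKQ_CUTOFF = 10  # illustrative value; the paper reports empirically derived cutoffs

def is_eligible(record: dict) -> bool:
    """Two-step check: enroll only if self-report and CKQ score agree."""
    reports_use = record["self_reported_cannabis_use"]  # step 1: self-report
    knows_content = record["ckq_score"] >= CKQ_CUTOFF   # step 2: corroboration
    return reports_use and knows_content

# A respondent who claims use but scores poorly on the CKQ is flagged as
# inconsistent rather than enrolled.
print(is_eligible({"self_reported_cannabis_use": True, "ckq_score": 4}))  # False
```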

The CKQ showed good internal consistency and, most importantly, the likelihood of answering the questions correctly increased substantially and significantly as a function of self-reported drug use. Moreover, in the laboratory study, the number of correct responses to the questionnaire was significantly higher among those who tested positive for cannabis in the urine drug test than among those who tested negative. It seems, therefore, that the inclusion of knowledge-based questionnaires is a valuable, simple, and economical tool to detect, and thereby reduce, deceptive responding in addiction research. The study adds to prior work (10) indicating that "two-step" procedures are effective at reducing eligibility problems and response inconsistencies. Importantly, to further facilitate use of the CKQ, the authors provide the instrument as supplementary material to the paper and describe specificity, sensitivity, and predictive values for its individual questions and cutoff points.
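As a rough illustration of the kind of psychometric checks reported, the sketch below computes Cronbach's alpha for a set of binary items and the sensitivity and specificity of candidate total-score cutoffs against a binary toxicology label. The data are randomly generated toys; only the formulas are standard, and nothing here reproduces the paper's actual item set or results.

```python
# A sketch, assuming item-level CKQ responses coded 0/1 and a binary urine
# toxicology result. Toy data only; the sample size and item count are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
items = rng.integers(0, 2, size=(200, 15))     # 200 respondents x 15 binary items
urine_positive = rng.integers(0, 2, size=200)  # toy toxicology labels

def cronbach_alpha(x: np.ndarray) -> float:
    """Internal consistency: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def sens_spec(scores: np.ndarray, labels: np.ndarray, cutoff: int):
    """Sensitivity/specificity of the rule 'score >= cutoff' against the label."""
    predicted = scores >= cutoff
    sens = (predicted & (labels == 1)).sum() / (labels == 1).sum()
    spec = (~predicted & (labels == 0)).sum() / (labels == 0).sum()
    return sens, spec

scores = items.sum(axis=1)
print(f"alpha = {cronbach_alpha(items):.2f}")
for cutoff in range(5, 11):
    sens, spec = sens_spec(scores, urine_positive, cutoff)
    print(f"cutoff {cutoff}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Raising the cutoff trades sensitivity for specificity, which is why the paper's per-question and per-cutoff operating characteristics matter when choosing an eligibility threshold.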

As with all research, however, the contributions of the paper need to be discussed in the context of its limitations. The use of MTurk biases the sample toward college students and people under 50 years old (11), which detracts from the generality of the results. A related issue is the lack of control over the conditions in which the CKQ was answered; the authors acknowledge that participants may have conducted web searches while responding. Perhaps more importantly, the main assumption of the paper is that a good CKQ score is more likely to be exhibited by someone who uses the substance than by someone who does not, or only rarely, uses it. The strength of this implication depends, however, on the nature of the questions. Questions about drug-related paraphernalia or drug-related culture may preferentially select eligible participants who are attuned or connected to the social groups that use the drug, whereas drug users who are isolated from current trends may be less likely to answer them correctly. The authors address this problem in the discussion of the paper and suggest that future knowledge-based questionnaires should avoid culture-based questions and, instead, focus on questions that inquire about direct substance use (e.g., costs) or substance effects.

Despite these caveats, the study by Strickland and Stoops (7) represents substantial progress toward improving the quality of the eligibility process. In the absence of objective measures, eligibility often relies on a small set of questions that are not further corroborated or that, after data collection, are slowly and painstakingly combed for errors, inconsistencies, and other signs of response fabrication. The study provides, likely for the first time, evidence that content-knowledge questionnaires are useful for assessing study inclusion and exclusion criteria. Ultimately, subsequent work should refine the questionnaire and its implementation.

References

  1. Devine EG, Waters ME, Putnam M, Surprise C, O’Malley K, Richambault C, Fishman RL, Knapp CM, Patterson EH, Sarid-Segal O, et al. Concealment and fabrication by experienced research subjects. Clin Trials. 2013;10:935–48.
  2. Lee CP, Holmes T, Neri E, Kushida CA. Deception in clinical trials and its impact on recruitment and adherence of study participants. Contemp Clin Trials. 2018;72:146–57. doi:10.1016/j.cct.2018.08.002.
  3. McCaul ME, Wand GS. Detecting deception in our research participants: are your participants who you think they are? Alcohol Clin Exp Res. 2018;42:230–37. doi:10.1111/acer.13556.
  4. Resnik DB, McCann DJ. Deception by research participants. N Engl J Med. 2015;373:1192–93. doi:10.1056/NEJMp1506985.
  5. Apseloff G, Ashton HM, Friedman H, Gerber N. The importance of measuring cotinine levels to identify smokers in clinical trials. Clin Pharmacol Ther. 1994;56:460–62. doi:10.1038/clpt.1994.161.
  6. Strickland JC, Alcorn JL 3rd, Stoops WW. Using behavioral economic variables to predict future alcohol use in a crowdsourced sample. J Psychopharmacol. 2019;33:779–90. doi:10.1177/0269881119827800.
  7. Strickland JC, Stoops WW. Utilizing content-knowledge questionnaires to assess study eligibility and detect deceptive responding. Am J Drug Alcohol Abuse. 2019;1–9. doi:10.1080/00952990.2019.1689990.
  8. Buhrmester M, Kwang T, Gosling SD. Amazon’s Mechanical Turk: a new source of inexpensive, yet high-quality, data? Perspect Psychol Sci. 2011;6:3–5. doi:10.1177/1745691610393980.
  9. Pamer J, Strickland J. A beginner’s guide to crowdsourcing: strengths, limitations, and best practices for psychological research. Psychol Sci Agenda [Internet]. 2016 [cited 2019]. Available from: https://www.apa.org/science/about/psa/2016/06/changing-minds
  10. Hydock C. Assessing and overcoming participant dishonesty in online data collection. Behav Res Methods. 2018;50:1563–67. doi:10.3758/s13428-017-0984-5.
  11. Walters K, Christakis DA, Wright DR. Are Mechanical Turk worker samples representative of health status and health behaviors in the U.S.? PLoS One. 2018;13:e0198835. doi:10.1371/journal.pone.0198835.
