Research Article

The participant’s voice: crowdsourced and undergraduate participants’ views toward ethics consent guidelines


ABSTRACT

The informed consent process presents challenges for psychological trauma research (e.g., Institutional Review Board [IRB] apprehension). While previous research documents researcher and IRB-member perspectives on these challenges, participant views remain absent. Thus, using a mixed-methods approach, we investigated participant views on consent guidelines in two convenience samples: crowdsourced (N = 268) and undergraduate (N = 265) participants. We also examined whether trauma-exposure influenced participant views. Overall, participants were satisfied with current guidelines, providing minor feedback and ethical reminders for researchers. Moreover, participant views on consent were similar irrespective of trauma-exposure. Our study has implications for IRBs and psychological researchers.

As researchers and clinicians within psychology, we know the importance of informed consent practices. Such practices aim to show respect toward participants as individuals and maintain their sense of autonomy; it is critical that participants can make an educated and voluntary decision about whether research participation is suitable for them (British Psychological Society, Citation2021; National Health and Medical Research Council, Citation2018). However, the informed consent process presents several challenges to psychological researchers (e.g., Burgess, Citation2007). For instance, we know that participants seldom read consent forms (e.g., Perrault & Nazione, Citation2016; Ripley et al., Citation2018), leading to issues with comprehension (e.g., Geier et al., Citation2021; Mann, Citation1994; Perrault & McCullock, Citation2019), and uncertainty about whether participants are truly “informed” (e.g., Varnhagen et al., Citation2005). Consider then psychologically sensitive areas of research, like trauma research, where additional consent-related challenges exist because of Institutional Review Board (IRB) concerns (e.g., Jaffe et al., Citation2015; Newman et al., Citation2006). For example, IRBs may mandate more severe risk information in consent forms than is clinically indicated—potentially leading to over-warning participants—or may question whether research with trauma-exposed populations (i.e., people who have experienced a traumatic event) should even occur (e.g., Abu-Rus et al., Citation2019; Becker-Blease & Freyd, Citation2006; Newman et al., Citation2006; Yeater & Miller, Citation2014). To address such challenges, researchers have examined how people react to participation in trauma-related research (Carlson et al., Citation2003; Jaffe et al., Citation2015; Legerski & Bunnell, Citation2010), provided recommendations to improve ethical guidelines (e.g., using a trauma-informed approach; Campbell et al., Citation2019; Cook & Hoas, Citation2011), and gathered feedback generally from IRBs (e.g., Rothstein & Phuong, Citation2007). Yet what remains absent is participants’ views on current ethical processes (including consent processes). Thus, we aimed to address this overarching issue here, alongside our secondary interest in whether participant preferences for consent differ based on their prior exposure to a traumatic event. Hereafter, we refer to “trauma-exposed” and “non-trauma-exposed” participants in line with Criterion-A for posttraumatic stress disorder (PTSD) diagnosis in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5; American Psychiatric Association, Citation2013).

One concern that some IRBs and researchers have is that trauma-related research—i.e., research involving participants who have experienced traumatic events, that asks participants about those events, and/or that involves exposing participants to analogue trauma—is riskier than other types of psychological research (see Abu-Rus et al., Citation2019; Becker-Blease & Freyd, Citation2006; Cromer et al., Citation2006; DePrince & Freyd, Citation2004; Mathews et al., Citation2022; Newman et al., Citation2006; Yeater & Miller, Citation2014 for discussion), because it might cause—or further worsen existing—psychological harm (e.g., Jaffe et al., Citation2015; Newman et al., Citation2006). Indeed, previous research has documented fears that this type of research may increase participants’ negative mood, re-traumatize them, and/or worsen their posttraumatic stress symptoms, possibly leading to psychologically “shattering” participants (Cromer et al., Citation2006; Jaffe et al., Citation2015; Newman, Citation2008; Newman et al., Citation2006). A second concern is whether participants—particularly those who are trauma-exposed—are even able to make an informed decision to participate in trauma-related research (Becker-Blease & Freyd, Citation2006; Du Mont & Stermac, Citation1996; Newman & Kaloupek, Citation2009; Newman et al., Citation2006; see also Fontes, Citation2004 for a nuanced discussion about people experiencing intimate partner violence, which is beyond the scope of the current paper). At the most extreme, people who participate in trauma-related research are considered a “vulnerable” population, in line with populations inherently afforded special ethical precautions, including children (e.g., Newman & Kaloupek, Citation2009; Newman et al., Citation2006; Yeater & Miller, Citation2014).

Yet, a growing body of literature suggests that these concerns about trauma-related research are unfounded. First, the risk to participants in trauma-related research may not be greater than for other types of psychological research (e.g., Jorm et al., Citation2007; Yeater & Miller, Citation2014). For example, in Cromer et al.’s (Citation2006) first study, participants reported no significant difference in distress after answering questions about emotional and sexual abuse, relative to questions about body image and SAT/GPA scores. Other researchers have found likewise: participants who answered questionnaires related to trauma and sexual experiences, and participants who completed cognitive exercises (e.g., IQ tests), reported similarly low levels of negative emotion during participation (Yeater et al., Citation2012). In fact, most research indicates participants tolerate trauma-related research: many participants report low-to-moderate distress and moderate-to-high benefits (see meta-analysis of N = 73,959 by Jaffe et al., Citation2015). Even participants with PTSD or prior trauma-exposure who report somewhat elevated distress also report significant benefits to the research, alongside little-to-no regret regarding participation (Jaffe et al., Citation2015; Mathews et al., Citation2022; Newman & Kaloupek, Citation2009). Moreover, these participants typically report that research benefits outweigh the costs of participation (Edwards et al., Citation2009; Kassam-Adams & Newman, Citation2005; McClinton Appollis et al., Citation2015; Newman & Kaloupek, Citation2009).

Second, experts within the field generally agree—despite suggestions from some IRBs—that trauma-exposed participants have the capacity to make an informed decision regarding participation; that is, trauma-exposure does not impair a person’s ability to make such decisions (Collogan et al., Citation2004; Hebenstreit & DePrince, Citation2012; Newman & Kaloupek, Citation2009; Newman et al., Citation2006; Ruzek & Zatzick, Citation2000). Indeed, prior research finds that participant coercion (partially operationalized via participant’s understanding of the consent form) is minimally indicated and unrelated to PTSD status (Jaffe et al., Citation2015). In summary, extant research suggests there is no inherent need to treat people with prior trauma-exposure as an ethically defined “vulnerable” population.

Existing literature on ethical issues in research has mostly collected researcher, past IRB-member, and ethicist perspectives. For instance, prior literature documents how participants react to trauma-related research (e.g., Jaffe et al., Citation2015) and researchers’ experiences with participants and consent (Xu et al., Citation2020). Moreover, extant literature reports IRB-member perspectives—such as the importance they place on different ethical issues arising in ethics applications (e.g., informing participants about risks; Allison et al., Citation2008; Rothstein & Phuong, Citation2007). Several prior papers also feature researcher and/or ethicist commentary on ethical issues in consent (e.g., Becker-Blease & Freyd, Citation2006; Haverkamp, Citation2005; Wells & Kaptchuk, Citation2012), such as applying a trauma-informed care perspective to ethical guidelines (Campbell et al., Citation2019).

The limited literature that examines what participants think of consent has done so with researcher/IRB intent. For instance, researchers have asked participants how to improve consent forms (e.g., bolding information), with the purpose of increasing consent form readability (e.g., Perrault & Keating, Citation2018; Perrault & Nazione, Citation2016). Similar research investigates why participants choose not to read and/or skim consent forms (e.g., Douglas et al., Citation2021; Geier et al., Citation2021; Perrault & Nazione, Citation2016), highlighting personal characteristics—like “pure laziness”—as a possible explanation (Perrault & Keating, Citation2018). Other research has focused on participant expectations within the researcher-participant relationship (e.g., participant’s obligations to the researcher, such as cooperation; Epstein et al., Citation1973; Singer, Citation1984) or on participants’ decision-making process during consent (e.g., when do people make the decision to consent to participate), including what specific consent information participants want (Cook & Hoas, Citation2011); though we note the latter study was not specific to the psychological trauma research context.

Taken together, extant literature provides evidence and expert opinions from researchers, prior IRB-members, and ethicists regarding ethical issues within psychological research, including consent issues in trauma-related research. We also have some understanding of what participants think of specific ethical considerations, including consent form presentation. But we do not know how participants—arguably the key stakeholders—evaluate ethical consent guidelines and practices. Another way to think about this issue is: do the recommendations we use, which are based on current guidelines (e.g., from the British Psychological Society, Citation2021; National Health and Medical Research Council, Citation2018; Public Welfare, Citation2018) and empirical evidence, reflect what participants want for consent? At a basic level, are participants aware (i.e., knowledgeable) of the consent information they should currently receive? Essentially, how well are current guidelines serving participants? Here, to address these questions, we examined participants’ general understanding (i.e., knowledge) of, and expectations (i.e., preferences) for, consent practices. Across two studies, we recruited two commonly used sample types: US crowdsourced and Australian undergraduate participants.Footnote1 Given some IRB apprehension toward trauma-related research, including trauma-exposed participants (e.g., Newman et al., Citation2006), we had a secondary interest in whether trauma-exposed participants differed from non-trauma-exposed participants in their consent views and preferences. Finally, we had a broader interest in understanding our commonly used sample types (e.g., prior study completion experience, why they choose to participate).

STUDY 1

Method

The Flinders Human Research Ethics Committee (4759) approved this study. We report all measures, conditions, and data exclusions. We pre-registered this study (https://osf.io/gjryt), as well as Study 2 (https://osf.io/undxz); the data files and supplementary files are available at Study 1: https://osf.io/gnwq4/ and Study 2: https://osf.io/ru8e5/.

Participants

Because correlations stabilize as they approach N = 260 (Schönbrodt & Perugini, Citation2013, Citation2018), we aimed to collect 260 participants. We based this decision on wanting to run internal consistency analyses, since many of the questionnaires used here were created specifically for our study.
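To illustrate this stability rationale concretely, the following minimal Python sketch (illustrative only, and not part of our analysis pipeline) simulates how much sample correlations fluctuate at different sample sizes; the population correlation of .30 and the number of replications are arbitrary choices.

# Corridor-of-stability illustration (cf. Schönbrodt & Perugini, 2013):
# sample correlations fluctuate less as N grows.
import numpy as np

rng = np.random.default_rng(42)
rho = 0.30  # arbitrary population correlation
cov = [[1.0, rho], [rho, 1.0]]

for n in (20, 60, 100, 260):
    # Draw 2,000 samples of size n and compute Pearson's r in each
    rs = [
        np.corrcoef(*rng.multivariate_normal([0, 0], cov, size=n).T)[0, 1]
        for _ in range(2000)
    ]
    lo, hi = np.percentile(rs, [2.5, 97.5])
    print(f"N = {n:>3}: 95% of sample rs fall in [{lo:.2f}, {hi:.2f}]")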

Using Amazon’s Mechanical Turk (MTurk), we collected 272 participants. In line with our pre-registered exclusion criteria, we excluded three participants for responding incorrectly to the cultural check question and one participant for failing all three attention checks (see Moeck et al., Citation2022); we also excluded one participant because they submitted inappropriate open-ended responses (i.e., containing extensive profanity that did not answer our survey questions) that rendered their data unusable. Thus, our final sample of 268 participants was 56.3% women (men: 42.5%, non-binary: 0.4%, prefer not to say: 0.7%), aged 19–76 (M = 39.91, SD = 11.68). Most participants were Caucasian (67%; Black: 12.4%, Mixed: 6%, Asian: 4.9%, Hispanic: 4.1%, Filipino: 1.5%, Native American: 1.1%, Latino: 0.7%, and Italian, Middle Eastern, Other [e.g., “unknown”], Caribbean, African, Chinese: 0.4% each). Their highest level of education was most often a Bachelor’s degree (44.8%; high school/equivalent: 23.9%, associate degree/diploma or certificate: 19.4%, Master’s degree: 9.7%, Doctoral studies: 1.5%, and primary school: 0.7%). On average, participants reported having completed 14,055.7 (SD = 32,237.30; median = 5,000 with strong positive skew; n = 248) studies on crowdsourcing platforms (e.g., MTurk); 54% of the sample reported having completed between 0 and 5,000 studies. Participants reported having completed such studies for an average of 3.73 years (SD = 2.86; n = 257), 3.33 months (SD = 2.79; n = 257), and 5.04 days (SD = 8.18; n = 245). Participants reported spending, on average, 3.94 hours (SD = 4.77) per day, over 6.24 days (SD = 9.60; n = 267) per week, completing online studies. Finally, participants perceived themselves as very experienced (M = 5.05, SD = 1.09, on the 0–6 scale described below).

Materials and measures

Demographic Information and MTurk Experience

We collected participants’ basic demographic information: age, self-reported ethnicity, gender, and highest level of education (indexed to the American education system). Additionally, we collected information about participants’ prior online study completion (e.g., “Approximately how many online studies have you completed on crowdsourcing platforms?”, “Approximately how many days/hours per week/day do you spend completing online studies?”) and their perceived experience with completing online studies (e.g., “How would you rate your experience of completing online studies on the following scale?”, where 0 = not very experienced, 3 = some experience, and 6 = very experienced).

Factors Affecting Decision to Participate (Adapted from Cook & Hoas, Citation2011)

To understand factors that may influence participants’ decision to participate in psychological research, we asked them to read five statements (e.g., “I believe I’m contributing to science”) and rate to what extent they agreed or disagreed (where 0 = strongly disagree and 6 = strongly agree) with these statements. We also included an “other” option where participants could input a reason that influences their decision to participate; participants who entered a reason also rated to what extent they agreed or disagreed with the reason.

Pre-Existing Knowledge of Consent

To examine participants’ preexisting knowledge of consent practices, we administered two types of questions. First, we presented participants with three broad but related open-text response questions (i.e., “According to the current ethics guidelines, [1] what information do you know must be provided about psychological research studies before participating? [2] what do you know your rights are as a participant? [3] what do you know you can do if you have concerns about psychological research studies?”). We administered these questions first to avoid providing participants with specific information through the questions themselves (i.e., about aspects of consent including study purpose, discussion of risks, etc.).

Second, because we were specifically interested in participants’ preexisting knowledge of specific consent domains—namely, risks presented at consent—we developed 15 consent-related statements for the purpose of this study (e.g., “The consent form should provide me with sufficient information and adequate understanding of the research study to make a voluntary decision about participating”) and asked participants to rate the extent to which they believed these statements were true or false (where 0 = definitely false, 3 = neither true nor false, and 6 = definitely true). We developed these statements based on consent guidelines from the British Psychological Society (Citation2021), the Australian National Health and Medical Research Council (Citation2018), and the American Public Welfare (Citation2018) Act. Of course, consent guidelines vary between the UK, Australia, and the US, as does the way that IRBs within and between these countries interpret these guidelines. Thus, we developed items that synthesized the key information across these guidelines. For example, we focused on critical areas of consent (e.g., relating to voluntariness, participant’s rights, risks) that were prominent in all three guidelines, in addition to recommended areas of consent information (e.g., incentives, sources of funding, community benefits, etc.) that appeared in only some of the guidelines and/or on which the guidelines differed in their recommendations. Therefore, in line with our scale anchors, if participants were knowledgeable about general ethical guidelines, they should rate most of these statements between “neither true nor false” and “definitely true”.

Our final 15 statements (current study: α = .86, Study 2: α = .82) formed nine consent components (see Table 1 for the specific statements within each consent component): voluntariness, purpose, methods, participant’s rights, benefits, risks, confidentiality, incentives, and declarations of interest.
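As an aside for readers wanting to reproduce this kind of reliability check, here is a minimal sketch in Python using the pingouin library (our own analyses used SPSS); the simulated ratings merely stand in for the 15 statement responses.

import numpy as np
import pandas as pd
import pingouin as pg

# Simulated stand-in data: 268 participants x 15 statements, with
# each 0-6 rating partly driven by a shared latent factor.
rng = np.random.default_rng(0)
latent = rng.normal(size=(268, 1))
ratings = np.clip(np.round(3 + 1.2 * latent + rng.normal(size=(268, 15))), 0, 6)
df = pd.DataFrame(ratings, columns=[f"statement_{i + 1}" for i in range(15)])

# Cronbach's alpha with its 95% confidence interval
alpha, ci = pg.cronbach_alpha(data=df)
print(f"alpha = {alpha:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")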

Table 1. Descriptive statistics for pre-existing knowledge and preferences measures.

Participant Preferences for Consent

To examine participants’ preferences for informed consent, we asked them to rate how strongly they agreed or disagreed (where 0 = strongly disagree, 3 = neither agree nor disagree, and 6 = strongly agree) with 17 statements (e.g., “I expect to be informed about the study’s purpose”). Again, we developed these statements based on British (British Psychological Society, Citation2021), Australian (National Health and Medical Research Council, Citation2018), and US Public Welfare (Citation2018) consent guidelines. The final 17 statements (current study: α = .82, Study 2: α = .78) mapped onto the same nine consent components as the pre-existing knowledge statements; see Table 1. We also asked participants to reflect on the statements they responded to here and describe anything they would like to change – either add in or take away – from these consent guidelines.

Criterion-A Trauma Question (American Psychiatric Association [APA], Citation2013)

For participants who consented to answering this single-item question,Footnote2 we asked them to think of their most traumatic or stressful event and indicate whether, during this event, they were exposed to death, actual or threatened injury, or actual or threatened sexual violence, in any of the following way(s): a) direct exposure, b) witnessing the trauma, c) learning that a relative or close friend was exposed to a trauma, or d) indirect exposure to aversive details of the trauma, usually in the course of professional duties (e.g., first responders); that is, Criterion-A for PTSD in the DSM-5 (American Psychiatric Association, Citation2013).

Procedure

To reduce the chance of bots/server farms completing our survey, participants first completed a Captcha screen and a cultural check question, and had to obtain greater than 80% on an English Proficiency Test (see https://osf.io/gjryt for precautions outlined in full, alongside Moeck et al., Citation2022). Next, participants completed informed consent procedures, demographics, factors affecting decisions to participate, general preexisting knowledge, guideline-specific preexisting knowledge, and guideline-specific preferences questions.Footnote3 In line with ethics requirements, we then presented participants with a new consent form that included details about the Criterion-A trauma question. If participants consented, they viewed the Criterion-A trauma question (Study 1: n = 232; Study 2: n = 241) and if they did not consent (Study 1: did not consent: n = 33, did not respond to question: n = 3; Study 2: did not consent: n = 15, did not respond to question: n = 9), they proceeded to debriefing procedures. Participants were compensated with $1.50 (USD) and debriefed in full.

Statistical overview

We ran most of our analyses using SPSS 28, employing null-hypothesis significance testing (NHST). Per our pre-registration, we also computed Bayes factors using JASP (Version 0.15). For these analyses, we used default Cauchy priors (0.707) and followed Wetzels et al.’s (Citation2011) guidelines for interpretation. Our strategy remained the same for Study 2.
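To make this workflow concrete, the sketch below shows the same two-pronged analysis in Python with the pingouin library (we used SPSS and JASP via their graphical interfaces; the data here are hypothetical). pingouin’s t-test reports frequentist statistics alongside a BF10 computed with a Cauchy prior whose scale parameter, r = 0.707, matches the JASP default we used.

import numpy as np
import pingouin as pg

# Hypothetical 0-6 ratings for two independent groups
rng = np.random.default_rng(1)
group_a = rng.normal(5.1, 0.9, size=165)
group_b = rng.normal(4.9, 0.9, size=103)

# NHST and Bayes factor in one call; r sets the Cauchy prior scale
res = pg.ttest(group_a, group_b, paired=False, r=0.707)
print(res[["T", "dof", "p-val", "cohen-d", "BF10"]])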

Thematic analysis

Per our pre-registration, we initially used NVivo to identify broad themes present in our data. After review, we developed codes specific to each of our four open-ended preexisting knowledge and desired-change questions via an inductive approach (Braun & Clarke, Citation2006). We applied the codes developed via NVivo 13 (2020, R1) to our data and refined the codes where required (e.g., where codes needed more specificity). We then recoded the data using the refined codes and measured inter-rater reliability between our two coders (NS, OM; inter-rater agreement range: 75%–99%). Coders met to work through discrepancies. Our analysis strategy remained the same for Study 2.
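Percent agreement of the kind we report is straightforward to compute; the sketch below uses hypothetical single-label codes (in practice, responses can receive multiple codes, which requires comparing code sets rather than single labels).

# Hypothetical codes assigned by two coders to five responses
coder_1 = ["risks", "rights", "unsure", "confidentiality", "risks"]
coder_2 = ["risks", "rights", "unclear", "confidentiality", "risks"]

agreements = [a == b for a, b in zip(coder_1, coder_2)]
percent_agreement = 100 * sum(agreements) / len(agreements)
print(f"Inter-rater agreement: {percent_agreement:.0f}%")  # 80% here

# Disagreements (here, response 3) are flagged for coder discussion.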

Results

Why do people participate in psychological research studies?

On average, participants strongly agreed that financial compensation (M = 5.04, SD = 1.15) influenced their decision to participate, followed by finding the studies interesting (M = 4.72, SD = 1.16), contributing to science (M = 4.51, SD = 1.24), and thinking the studies are a good use of their time (M = 4.41, SD = 1.30); participants indicated that they neither agreed nor disagreed (i.e., the midpoint) with feeling like the studies will help their own mental health (M = 3.04, SD = 1.95).Footnote4 Pairwise comparisonsFootnote5—see Table S1 for results of all potential pairwise comparisons in full—confirmed that participants strongly agreed financial compensation influenced their decision to participate, more so than contributing to science (p < .001, d = 0.44, 95% CIs [0.23, 0.83]), helping their own mental health (p < .001, d = 1.25, [1.57, 2.43]), finding the studies interesting (p = .025, d = 0.28, [0.02, 0.62]), or feeling the studies are a good use of time (p < .001, d = 0.51, [0.01, 0.40]). In line with our pre-registration, we re-ran our analyses with only participants who consented to the Criterion-A trauma question (n = 232). The comparison between financial compensation and finding the studies interesting was no longer statistically significant, p = .063, d = 0.26, 95% CIs [−0.01, 0.59]; all other results were unchanged.

Participants also stated other reasons that influence their decision to participate, including: learning something new about themselves or another topic (e.g., psychology; 27.1%; M = 5.54, SD = 0.66), passing the time/curing boredom (16.7%; M = 5.25, SD = 0.89), liking to help researchers and/or others (14.6%; M = 5.00, SD = 1.00), fun/entertainment (14.6%; M = 4.86, SD = 1.46), a reason covered by our existing items (e.g., financial gain, interest, contribution to science; 12.5%; M = 5.83, SD = 0.41), skill-building (e.g., typing, cognitive abilities; 6.3%; M = 6.00, SD = 0.00), mental challenge (4.2%; M = 5.50, SD = 0.71), sharing their opinion (2.1%; M = 6.00, SD = 0.00), and study advancement (i.e., completing studies helps people get invited to larger surveys; 2.1%; M = 6.00, SD = 0.00).

Overall, our findings document novel reasons that MTurk workers choose to participate in psychological research studies. Here, participants strongly endorsed financial compensation and finding the studies interesting as reasons for participation. These reasons somewhat differ from prior trauma-related research investigating women’s experiences with intimate partner violence, which found participants’ main reasons for participation were “I was curious” and “To help others” (Hebenstreit & DePrince, Citation2012). Of course, one of the key features of MTurk is that workers can complete tasks for “money” and are considered part of a “24×7 workforce” (Amazon Mechanical Turk, Citation2018). Thus, it is perhaps no surprise that MTurk workers endorsed this reason the most and that, as such, they differ from specific trauma samples.

Our results also contribute to existing research examining MTurk worker characteristics. Our findings lend support to the idea that MTurk workers likely approach Human Intelligence Tasks (HITs) based on compensation—as Chilton et al. (Citation2010) also indicated—rather than content. In the context of trauma-related research (i.e., here, responding to a Criterion-A question), our results also point toward the idea that MTurk workers engage with trauma-related research because they find it interesting, in addition to the compensation.

What do participants already know about consent practices?

We first turn to the results of our thematic analysis of participants’ responses to the four open-ended questions (see Table 2 for code themes, examples, and frequencies).

Table 2. Thematic coding tables with examples and frequencies for crowdsourced participants.

Q1: According to Current Ethics Guidelines, What Information Do You Know You Must Be Provided About Psychological Studies Before Participating?

Overall, some participants reported knowing that, prior to consenting to participate, they should receive information related to: risks (39.6%), researchers and IRBs (including contact information; 31%), confidentiality (29.9%), study’s purpose (23.5%), method (20.9%), rights (19%), and compensation (17.9%). To a lesser extent, participants reported knowing they should also receive information regarding: research outputs (i.e., what will be “done” with the research; 14.9%), informed consent (10.8%), benefits (10.1%), and data storage (9.7%). Few participants reported that they should receive information related to study demands (e.g., exclusion criteria; 2.6%), contact information for mental health services (2.2%), and funding of the research (0.7%). A further 7.1% of participants responded that they were unsure what information they should receive; 11.2% of responses were unclear and were therefore not coded into a relevant theme.

Q2: … What Do You Know Your Rights Are As a Participant?

Approximately half of our participants indicated that they had the right to withdraw from participation in a study (50%); 26.1% of participants agreed with the idea that they knew what their rights were as a participant, while 14.6% of participants reported being unsure of their rights. Some participants reported knowing they could withdraw their data from a study (10.8%), and that they had the right to informed consent (e.g., via viewing a consent form; 10.8%), contact information (for researchers and/or IRBs; 10.8%), and confidentiality (16%). Few participants reported knowing that they had the right not to feel coerced during the consent process (i.e., voluntariness; 2.6%), that they should receive method-related information (3.4%), or relevant risk information (2.2%). Interestingly, a few participants reported information that was inconsistent with their rights (e.g., remaining anonymous in all studies, when such information may vary study-to-study; 4.9%) or in violation of their rights (e.g., believing they have no rights; 2.6%). Finally, 1.9% of responses were unclear and therefore not coded.

Q3: … What Do You Know You Can Do if You Have Concerns About Psychological Research Studies?

Many participants reported knowing they could use the contact information provided for researchers and/or IRBs if they had any concerns about psychological research studies (72.4%); some participants also reported agreeing with the idea that they knew what to do (17.2%). Other participants reported contacting the employer (i.e., MTurk; 4.1%) or actions other than contacting those already mentioned (e.g., not participating; 13.8%). A few people reported they were unsure (6%) and some provided unclear responses to the question (3%).

Together, these results have three overarching implications. First, participants have some basic understanding of what information they should receive prior to participating in psychological research. However, given that less than half of participants recalled critical consent components (e.g., methods, risks), baseline knowledge about consent appears low in crowdsourced participants. Perhaps some preexisting knowledge, combined with the choice to focus on self-relevant consent information (e.g., method and risk information; Douglas et al., Citation2021), is sufficient for participants to believe they have engaged with informed consent. Second, participants have a moderate understanding of their participation rights (i.e., knowing they can withdraw if required). Concerningly though, participants’ knowledge about other participant rights (e.g., confidentiality) is minimal, and several participants expressed that they do not know what their rights are. Given that our sample perceived themselves, on average, as highly experienced in completing psychological research studies, it is problematic from an ethical standpoint that participants’ knowledge of their participation rights is not more comprehensive. Third, most participants know they can reach out to relevant researchers and/or IRBs if they have concerns about a study, which is promising. Additionally, some participants reported other avenues of action, including withdrawing from the research study, if they had concerns. However, it is unclear how often participants express concern directly to researchers and/or IRBs. While we imagine these rates are low given that research continues—and is usually monitored by IRBs—establishing whether participants reach out, and how satisfied they are with this process, is an important future direction. This is especially so given that, here, we found some anecdotal reports of participants being unable to reach researchers and/or IRBs, or believing that contact information is usually fake since they do not hear back from people. We note, though, that some of these participant reports could be conflated with market research, given responses were sometimes vague (e.g., referred to an IRB but not one linked to a university).

Pre-Existing Knowledge Statements

Next, we consider participants’ responses to our 15 consent-related statements rated on a true/false agreement scale. Descriptive statistics, along with consent statements listed in full, appear in Table 1. Overall, participants showed good understanding of critical consent guidelines: they rated six ethics statements centering on voluntariness, participant rights, risks, and confidentiality as “definitely true” (i.e., > 5, where 6 = definitely true), and six statements focusing on methods, risks, confidentiality, and incentives as somewhat true (i.e., > 4).

Participants demonstrated less knowledge in terms of one area of consent: they rated knowing the study’s purpose as “neither true nor false” (i.e., > 3). They also rated two statements—relating to benefits (i.e., community benefits) and declarations of interest consent components—as “somewhat false” (> 2, where 0 = definitely false).Footnote6 However, only the Australian research guidelines include these specific consent components (the UK and US guidelines do not), and even then, the Australian guidelines suggest that this information should be outlined to participants, but it generally “ … should be kept distinct from … ” critical consent information that may impact a participant’s voluntary decision to participate (e.g., sufficient information about purpose, methods, participant rights, risks etc.) (National Health and Medical Research Council, Citation2018, pp. 16–17). Therefore, these aspects of consent information may be lesser known to participants or less salient in consent forms.

Our results for the preexisting knowledge statements contrast with our findings for the preexisting knowledge open-text questions. Specifically, our statement data indicate that crowdsourced participants have a better understanding of critical consent components (e.g., voluntariness, participant rights) than the open-text response data suggested. Taking data from both measurement types together, we consider crowdsourced participants’ preexisting knowledge of consent practices low-to-moderate.

What are participants’ consent preferences (i.e., what do they expect from IRBs and researchers)?

Next, we examined participants’ preferences for consent practices (see Table 1 for descriptive statistics and statements in full). Participants reported strong preferences (i.e., > 5, where 6 = strongly agree) in favor of eight consent statements, including consent components such as voluntariness, methods, participant’s rights, confidentiality, and incentives; and somewhat strong preferences (i.e., > 4, “somewhat agreed”) for three ethics statements, across the methods, risks, and confidentiality components. Participants indicated that they neither agreed nor disagreed—that is, a neutral preference (> 3)—with two consent statements across the purpose and benefits components.

Moreover, participants somewhat disagreed (i.e., > 2, where 0 = strongly disagree) with three consent statements, across the methods, benefits, and declarations of interest components, and strongly disagreed with one statement related to risks; this pattern indicates that having risks listed on the consent form is important to participants.Footnote7 Thus, in terms of risk, our results suggest that participants fall somewhere between wanting all possible risks listed and having no risks listed on the consent form. This finding likely reflects individual variability in preferences regarding risk information; for example, we know within a health context that some people avoid health-related information (e.g., if a person considers themselves “healthy,” they may avoid information that causes them to question their “healthy” status and thereby minimize potential anxiety; Brashers, Citation2001; Brashers et al., Citation2002). Thus, some people may prefer having less risk information while others prefer having all possible information to inform their decision.

Regarding crowdsourced participants then, our results suggest that informed consent practices should continue to include critical information, including information related to expected duration of study, reasonably foreseeable risks, confidentiality, participant rights (i.e., withdrawal from participation), and incentives (i.e., whether compensation is available). Importantly, these preferences should come together to endorse voluntariness (i.e., providing participants with enough information and understanding to make an informed decision), a critical consent component that participants showed a strong preference for. Such preferences for consent are mostly consistent with the areas of consent forms that participants tend to read first (e.g., method and confidentiality; Douglas et al., Citation2020); noting however that prior research was conducted with undergraduates. Surprisingly, crowdsourced participants showed a neutral preference for study purpose and benefit information, yet still indicated they wanted some benefits included at consent. One strategy to provide participants with more information at consent—making them more informed—has been to include example questions in the method section. But here, crowdsourced participants showed a somewhat strong preference against such information being included as part of the consent process. Therefore, such method information (i.e., question examples) is one area we could consider excluding at consent in favor of consent readability and form length (e.g., Albala et al., Citation2010).

Do Participant Consent Preferences Differ Based on Prior Trauma-Exposure?Footnote8

Next, we examined whether participants’ consent preferences differed based on prior trauma-exposure. Per our sensitivity analysis, we could reliably detect effects at d = 0.38 and above. Comparable to prior Criterion-A traumatic event exposure estimates (e.g., Benjet et al., Citation2016; Bridgland & Takarangi, Citation2022; Kilpatrick et al., Citation2013), approximately two-thirds of crowdsourced participants reported trauma-exposure (61.64%; no trauma-exposure = 38.36%).
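For readers interested in how such a sensitivity figure is derived, here is a minimal Python sketch using statsmodels; the group sizes approximate our reported 61.64%/38.36% split of N = 268 (~165 vs. ~103), and the 80% power criterion is an assumption for illustration, so the result will only approximate the d = 0.38 reported above.

from statsmodels.stats.power import TTestIndPower

# Solve for the smallest detectable effect size in a two-sided
# independent-samples t-test, given group sizes, alpha, and power.
analysis = TTestIndPower()
d = analysis.solve_power(effect_size=None, nobs1=165, alpha=0.05,
                         power=0.80, ratio=103 / 165)
print(f"Smallest reliably detectable effect: d = {d:.2f}")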

We ran a series of independent-samples t-tests and corrected for multiple comparisons (i.e., adjusted statistical significance: p < .003). Across the 17 consent preference statements, our analyses revealed that preferences did not differ between trauma-exposed and non-trauma-exposed participants, ps: .010–.925, ds: 0.13–0.38 (see Table 3 for results in full). We found substantial (i.e., BF10 = 6.22) and anecdotal (i.e., BF10 = 1.31) evidence in favor of the alternative hypothesis (i.e., that there is a group difference)—relative to the null hypothesis—for two preference statements across two consent components: methods (i.e., study duration) and voluntariness, respectively. For study duration, on average, trauma-exposed participants more strongly agreed (i.e., “agree” to “strongly agree”) with wanting to know the expected duration of the study than non-trauma-exposed participants, who less strongly agreed (i.e., “somewhat agree” to “agree”); a small-to-medium effect size. In terms of voluntariness, on average, trauma-exposed participants had a slightly stronger preference for this voluntariness statement than non-trauma-exposed participants. But both groups still had a strong preference toward the voluntariness statement (i.e., “agree” to “strongly agree”) and the effect size was small. Thus, although our Bayes factors showed evidence in favor of the alternative hypothesis (i.e., that there is a difference between trauma-exposed groups), both groups reported agreement in a similar direction. For the remaining preference statements, we found substantial evidence (i.e., BF10s: 0.15–0.27) and anecdotal evidence (i.e., BF10s: 0.34–0.82) in favor of the null hypothesis, relative to the alternative.
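A sketch of this comparison loop, extending the earlier pingouin example (hypothetical data and column names; the Bonferroni-style correction of .05/17 ≈ .003 matches the adjusted threshold above):

import numpy as np
import pandas as pd
import pingouin as pg

# Hypothetical data: 17 preference ratings plus a trauma-exposure flag
rng = np.random.default_rng(2)
cols = [f"pref_{i + 1}" for i in range(17)]
df = pd.DataFrame(rng.normal(5, 1, size=(268, 17)), columns=cols)
df["trauma_exposed"] = rng.random(268) < 0.62

alpha_corrected = 0.05 / 17  # approximately .003
for col in cols:
    res = pg.ttest(df.loc[df["trauma_exposed"], col],
                   df.loc[~df["trauma_exposed"], col], r=0.707)
    p = res["p-val"].iloc[0]
    bf10 = res["BF10"].iloc[0]  # pingouin reports BF10 as a string
    flag = "significant" if p < alpha_corrected else "ns"
    print(f"{col}: p = {p:.3f} ({flag}), BF10 = {bf10}")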

Table 3. Descriptive and inferential statistics, including Bayes factors, for pre-existing knowledge and consent preference statement group comparisons.

Without correction for multiple comparisons, three of the preference statements reached traditional significance (i.e., p < .05; see Table 3). Two of these statements—expected duration (p = .010) and voluntariness (p = .032)—are the statements reflected in our Bayes factor results above. A third statement, related to study purpose, also reached significance (p = .048). However, we interpret this result with caution given that the p value is close to the cutoff, the effect size is small, and both groups are positioned somewhere between “neither agree nor disagree” and “somewhat agree.”

Altogether, our results suggest that, generally, crowdsourced participants’ consent preferences are similar irrespective of prior trauma-exposure. Importantly, participants’ preferences regarding the communication of risks associated with participation did not seem to differ based on trauma-exposure. Our results do suggest that trauma-exposed participants (vs. non-trauma-exposed) have a somewhat stronger preference for statements related to voluntariness. This finding underscores the importance of providing trauma-exposed people with adequate information to make an informed decision, showing respect for them as people and supporting their sense of autonomy (e.g., National Health and Medical Research Council, Citation2018; Newman & Kaloupek, Citation2009), two factors that are often absent during traumatic event exposure. Our finding that trauma-exposed participants have a stronger preference for consent information related to expected study duration likely feeds into the idea of supporting informed decision-making. Hence, these are two areas that may be important to focus on during consent for crowdsourced participants, particularly when almost two-thirds of our sample reported prior trauma-exposure, i.e., representing the “invisible” trauma-exposed participants of psychology research (Becker-Blease & Freyd, Citation2006; Newman et al., Citation2006).

What Do Participants Want to Change, if Anything, About Current Consent Guidelines?

More than half our participants were satisfied with current ethical guidelines, i.e., wanted no change (61.9%). Apart from wanting more detail about aspects that should already be part of consent forms (17.2%), some notable change ideas—suggested by a minority of participants—related to more accurate information (e.g., about timeframe of completion; 6.7%), fairer pay (6.3%), and improved risk information (e.g., more obvious risk warnings; 4.1%). Alarmingly, regarding risk information, several participants indicated they had been shown potentially disturbing content (e.g., photos/videos) without being informed of the risks. A few participants requested changes around: presenting consent information (3.0%); enforcing ethical guidelines (3.0%), including information related to deceit (and debriefing procedures; 2.2%), specific data management procedures (1.5%), and making design-specific changes (e.g., not using attention checks during consent; 1.5%); removing parts of the consent form that already exist (1.9%); and including referrals to mental health services (0.4%). Of note, 11.2% of participants requested changes that should already be enacted via current guidelines (e.g., information regarding risks, researcher/IRB contact information, time commitment, information presented in an easy-to-read way) though may not be reflected in the consent forms people actually view; 3.4% of participants did not respond to the question, 2.2% provided unclear answers, and 0.7% expressed they were unsure whether the guidelines should change.

Overall, our crowdsourced sample provided change suggestions based on issues specific to MTurk. For instance, several participants cited concerns over not being paid enough, or even that they should be paid more if the research involves certain tasks (e.g., viewing traumatic content). Yet, for researchers, these concerns present an interesting dilemma, since compensation should be proportionate to research requirements (e.g., to compensate travel) and should not be coercive (see National Health and Medical Research Council, Citation2018, p. 17; Public Welfare, Citation2018). Thus, particularly for trauma-related research on MTurk, increasing pay because participation involves trauma-related content would be coercive; that is, it could encourage participants to take additional risks.

Participants also suggested several improvements that are generally easy for researchers and/or IRBs to implement. These suggestions included providing detailed information regarding data use, researcher/IRB contact details, stating deception may be used, and highlighting risk information (e.g., bolded, underlined). Moreover, some participants reported information consistent with IRBs and/or researchers not engaging in ethical practices during informed consent procedures (e.g., not including IRB information). Relatedly, several participants indicated “changes” that, according to ethical guidelines, should already be enacted by researchers and/or IRBs. Therefore, these findings serve as a reminder to IRBs and researchers to positively engage with the ethical process. Finally, many participants were concerned about the accuracy of advertised participation time estimates. One solution, which some researchers likely already employ, is to pilot surveys for time (e.g., pilot within lab using people naïve to the design, conduct a small online pilot to confirm time). Within some survey programs (e.g., Qualtrics), researchers can also check the median completion time and adjust time estimates accordingly.
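For instance, a few lines of Python suffice to check median completion time in a downloaded data file (the file name here is hypothetical; “Duration (in seconds)” is the timing column in typical Qualtrics CSV exports, which also place two metadata rows beneath the header, though exact layouts vary by export settings).

import pandas as pd

# Hypothetical Qualtrics CSV export; skip the two metadata rows
# that recent Qualtrics exports place beneath the header row.
df = pd.read_csv("survey_export.csv", skiprows=[1, 2])

# Convert the standard Qualtrics timing column to minutes
minutes = df["Duration (in seconds)"].astype(float) / 60
print(f"Median completion time: {minutes.median():.1f} minutes")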

STUDY 2

Method

Participants

As in Study 1, we aimed to collect N = 260 (Schönbrodt & Perugini, Citation2013, Citation2018). We collected 272 undergraduate participants using the Flinders University SONA system. However, we excluded seven participants for failing all three attention checks (e.g., Moeck et al., Citation2022), resulting in our final sample of N = 265.

Our sample was mostly women (82.3%; men = 16.2%; other = 0.8%; prefer not to say = 0.8%), aged 17–54 (M = 20.52, SD = 5.84), and of Caucasian (or white) ethnicity (50.8%; mixed = 6.4% [e.g., Fiji-Italian], Asian = 2.7%, English = 2.3%, Indian = 1.9%, Japanese = 1.1%, other = 7.9% [e.g., Aboriginal Australian, Italian, Hispanic]; 25.8% of participants reported a nationality instead [e.g., Australian]). Our sample’s highest level of education was most commonly high school/equivalent (78.9%; associate degree/diploma or certificate = 12.5%, Bachelor’s degree = 8.3%, primary school = 0.4%). Overall, participants reported completing an average of 5.43 psychological studies (SD = 5.82; n = 258)Footnote9; participants had most commonly been completing these psychological studies for one year (15.3%), although responses ranged from less than 24 hours (4.6%) to three years (3.1%; see Table S3 for statistics in full). On average, participants perceived themselves as somewhat experienced in completing psychological studies (M = 2.65, SD = 1.51; where 0 = not very experienced, 3 = somewhat experienced, and 6 = very experienced).

Procedure

Most procedural aspects were identical to Study 1. However, we removed questions specific to crowdsourcing platforms (e.g., “Approximately how many online studies have you completed on crowdsourcing platforms?”, “Approximately how many days/hours per week/day do you spend completing online studies?”). Participants were awarded credit (0.5 credits) for their participation.

Results

Why do people participate in psychological research studies?

On average, undergraduates somewhat agreed that they participated in psychological studies because they found them interesting (M = 4.36, SD = 1.12; on a 7-point scale, where 6 = strongly agree), followed by contributing to science (M = 4.13, SD = 1.14). Participants indicated they neither agreed nor disagreed with participating because they felt the studies were a good use of their time (M = 3.52, SD = 1.36) or that participating helped their own mental health (M = 2.97, SD = 1.45); and participants somewhat disagreed that they participated because the studies benefited them financially (M = 2.11, SD = 1.75), likely because they received course credit instead. We confirmed—using pairwise comparisons—that participants somewhat agreed that finding studies interesting influenced their decision to participate, more so than contributing to science (p = .046, d = 0.20, 95% CIs [0.002, 0.49]), feeling the studies are a good use of their time (p < .001, d = 0.67, [0.61, 1.00]), helping their own mental health (p < .001, d = 1.07, [1.16, 1.67]), or benefiting them financially (p < .001, d = 1.53, [1.94, 2.62]; see Table S2 for comparisons in full). We also repeated these analyses without participants who did not consent to answering the Criterion-A question (leaving n = 237) and one result changed: the comparison between finding studies interesting and contributing to science was no longer statistically significant, p = .103, d = 0.20, [−0.02, 0.46].

Thirty-eight participants gave other reasons that influenced their decision to participate, including: completing studies for credit or as part of course requirements (76.9%; M = 5.83, SD = 0.38), gaining experience or learning more about psychology/psychological research (15.4%; M = 5.40, SD = 0.55), and contributing to psychological science (7.7%; M = 5.33, SD = 0.58).

Here, our results indicate that undergraduates choose to participate in psychological research studies out of interest and because they feel like they are contributing to science (including psychological science). Such reasons fit with the pedagogical experience undergraduate participation seeks to provide (e.g., Boyer Commission on Educating Undergraduates in the Research University, Citation1998; Kilgo et al., Citation2014), and empirical links between undergraduate research participation and university satisfaction (Bowman & Holmes, Citation2018).

Interestingly, we found participants somewhat disagreed with receiving financial compensation as a reason to participate in psychological research. While we did not offer financial compensation in the current study, we know that many on-campus studies do. Further, ~14% of our sample specified that the reason for their participation was based on course credit allocation. Thus, although IRBs may be concerned about course credit increasing coercion, our data suggest otherwise. Here, however, undergraduates had the option to complete an assignment if unwilling to participate in research; implementing this approach in other undergraduate samples could foster scientific interest in undergraduates (vs. coercion).

What do participants already know about consent practices?

We now turn to the results of our thematic analysis of participants’ responses to our open-ended questions (see Table 4 for code themes, examples, and frequencies).

Table 4. Thematic coding tables with examples and frequencies for undergraduate participants.

Q1: According to Current Ethics Guidelines, What Information Do You Know You Must Be Provided About Psychological Studies Before Participating?

Approximately half of participants mentioned critical consent components such as: methods (54.3%), study purpose (41.9%), and participant rights (e.g., right to withdraw from study; 34.7%). Other participants reported knowing they must be provided with information regarding: informed consent (31.3%), potential risks (28.3%), confidentiality (e.g., whether data will be deidentified; 17.7%), and potential research outputs (16.2%). Some participants identified that they must be informed about: relevant contact information (11.3%), study demands (7.2%), data storage (i.e., how data will be stored; 6.4%), IRB approval (5.7%), and benefits (3.4%). Few participants reported having to know about: contact information for relevant support services (3%), compensation (1.9%), and research funding (0.4%). Some participants reported that they were unsure what information they should receive (1.9%); several responses were unclear and therefore not coded (8.7%).

Q2: … What Do You Know Your Rights Are As a Participant?

Most participants reported that they knew they had the right to withdraw from participation (81.9%). Approximately one quarter of participants indicated that they had the right to confidentiality (29.8%) and informed consent (23.1%). Some participants reported that they had the right to: harm minimization (e.g., to not be harmed; 17%), debriefing information if deception was used (10.6%), voluntariness (e.g., not feel coerced; 10.2%), and to withdraw their data from a study (6.8%). A minority also reported having the right to: know how data will be used (e.g., research outputs; 5.7%), method information (5.3%), contact information for researchers and/or IRBs (3%), and know how their data will be stored (1.1%). Of note, 12.5% of participants reported information inconsistent with their rights (e.g., remaining anonymous) and 1.1% reported information that was directly in violation of their rights (e.g., believing they had no rights or that they could not withdraw from participation). A further 3% of participants indicated they were unsure what their rights were and 1.1% reported agreement with the idea they knew what their rights were. Some responses were unclear and thus not coded (3.8%).

Q3: … What Do You Know You Can Do if You Have Concerns About Psychological Research Studies?

Most participants indicated that they knew they could contact researchers, IRBs, and/or relevant university personnel if they had concerns about a psychological research study (81.1%); participants also reported actions other than contacting relevant people/organizations (e.g., withdrawing from participation; 30.9%). Some participants reported contacting the relevant company/organization as an option if they had concerns about a psychological research study (3.8%), while others reported that they were unsure what to do (7.9%). Some responses were unclear and were not coded (4.2%).

To summarize, our results have three critical implications. First, undergraduates have some understanding of what information they should receive prior to participating. Specifically, basic knowledge was present for some critical consent components (e.g., methods, purpose, and rights), yet lacking in others (e.g., informed consent, risks, confidentiality, research outputs). Second, most undergraduates knew they could withdraw from participation if required, indicating strong understanding of this critical participant right. While some participants cited two other important rights (i.e., confidentiality and informed consent), few participants reported knowing their other rights. In fact, some participants reported information that was inconsistent with their rights, indicating that they may be agreeing to participate in studies without understanding the ramifications for data handling, storage, etc. For example, participants may assume their data is completely anonymous when it could be re-identifiable by researchers and personnel associated with the project. Such a discrepancy in understanding could lead participants to feel deceived or like they cannot trust researchers. Third, most undergraduates know they can contact relevant personnel if they have concerns about psychological research studies. Some participants also indicated useful actions other than contacting relevant people, like withdrawing from the research if they were concerned about a study.

Pre-Existing Knowledge Statements

Next, we examined participant responses to our 15 consent statements; see Table 5 for descriptive statistics and consent statements listed in full. Overall, undergraduates showed a good understanding of consent guidelines: they rated nine ethics statements as “definitely true” (i.e., > 5, where 6 = definitely true), including consent components such as voluntariness, methods, participant rights, and risks; and four statements as somewhat true (i.e., > 4), across the purpose, confidentiality, incentives, and benefits components (benefits: M = 3.99, which we include here).

Table 5. Descriptive statistics for pre-existing knowledge and preferences measures.

Undergraduates demonstrated less knowledge in relation to benefits, rating one statement as “neither true nor false” (> 3), and declarations of interest, rating one statement as “somewhat false” (> 2, where 0 = definitely false).Footnote10 Similar to our findings in Study 1, these consent guideline statements may represent a lesser-known area of consent for undergraduates and/or areas that they do not tend to view or notice on consent forms.

Much like in Study 1, if we looked only to the preexisting knowledge statement data, we might conclude that undergraduates have strong knowledge of consent. However, the open-text data showed their overall understanding of consent practices was much lower. Hence, based on these data, we estimate undergraduates’ baseline knowledge of consent as low-to-moderate, consistent with similar consent literature (e.g., Perrault & Nazione, Citation2016).

What are participants’ consent preferences (i.e., what do they expect from IRBs and researchers)?

Next, we examined undergraduates’ preferences for consent practices (see Table 5). Participants reported strong agreement (i.e., > 5, where 6 = strongly agree) with nine preference statements across six consent components, including voluntariness, methods, participant rights, confidentiality, and risks; and somewhat strong agreement with three preference statements (i.e., > 4), for components such as study’s purpose, confidentiality, and incentives. Participants indicated that they neither agreed nor disagreed (i.e., > 3) with two consent preference statements on two components: methods and benefits. These results suggest that undergraduates have strong preferences in favor of these consent practices (e.g., voluntariness, methods, risks).

Participants indicated somewhat strong disagreement (i.e., M > 2, where 0 = strongly disagree) with two preference statements across two components: benefits and declarations of interest. Because these statements proposed omitting information, disagreement indicates a preference for inclusion: in terms of benefits, participants' ratings indicated they want benefits listed on the consent form. Participants also reported strong disagreement with one risk statement, meaning that participants want risks listed on the consent form.Footnote11 Regarding risk communication, undergraduates' preference responses indicate they want all potential risks associated with participation reported at consent, as opposed to no risks included at consent.

Thus, our results indicate that undergraduates want critical consent components to continue to form part of the consent process (e.g., methods, rights, confidentiality, risks, purpose, and incentives). Because undergraduates indicated a strong preference toward voluntariness, these critical consent components should continue to work together to help participants feel they have enough information to make an informed decision regarding participation; a decision that is also free of coercion. Of note, several consent components—including method, risk, and confidentiality-related information—are parts of consent that undergraduates seem more likely to read first (Douglas et al., Citation2021). Indeed, future research could use eye-tracking technology to confirm which sections of informed consent participants engage with, building upon prior research that uses eye-tracking to assess consent behavior more generally (e.g., how the number of informed consent pages affects reading; Rosa et al., Citation2019; Russell et al., Citation2019).

Although disclosing any potential risk associated with participation—whether these risks are significant or minor—may present other ethical issues (e.g., warning people of a negative outcome may inadvertently cause that outcome to occur [nocebo effect]; Abu-Rus et al., Citation2019), undergraduates report a strong preference for including all risks during the consent process. Indeed, this consent preference is helpful for researchers and IRBs to consider when formulating consent forms specific to undergraduates. One way to honor undergraduates’ preferences and balance against potential nocebo effects is to closely consider how risk information is presented. For instance, recent research suggests that applying framing effects (e.g., “8 out of 10 people will not experience side effects”) to risk information presentation may attenuate nocebo effects (e.g., Barnes et al., Citation2019; Faasse, Citation2019; Webster et al., Citation2018). Exploring such presentation options within the context of psychological research consent forms may assist in balancing participant preferences with harm minimization.

Finally, undergraduate preferences for consent highlight two potential areas of consent that may be shortened and/or not included. Undergraduates appeared indifferent about the idea of including sample questions (as part of a method explanation) or including community-related benefits. Therefore, for an undergraduate sample, researchers could provide a general method overview (e.g., view a film and answer questionnaires regarding emotions) and focus on benefits that are self-related (i.e., to the participant).

Do Participant Consent Preferences Differ Based on Prior Trauma-Exposure?

Here, we also examined whether participants' consent preferences differed based on prior trauma-exposure. Again, as per our sensitivity analysis, we could reliably detect significant effects at d = 0.41 and above. See Table 6 for descriptive and inferential statistics in full. Comparable to prior traumatic event exposure estimates in undergraduate samples (e.g., Frazier et al., Citation2009), a majority of our participants reported prior trauma-exposure (73.03%; no trauma-exposure = 26.97%).
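To illustrate the sensitivity analysis reported above, the following sketch solves for the smallest detectable effect given the observed group split. The group sizes are derived from our reported sample proportions, and the alpha and power inputs (.05, .80) are assumptions for illustration, not necessarily the exact values used.

```python
# Sensitivity analysis sketch: smallest Cohen's d detectable with the observed
# group sizes. Group ns are derived from N = 265 and 73.03% trauma-exposure;
# alpha = .05 and power = .80 are assumed inputs for illustration.
from statsmodels.stats.power import TTestIndPower

n_exposed = round(265 * 0.7303)    # ~194 trauma-exposed participants
n_unexposed = 265 - n_exposed      # ~71 non-trauma-exposed participants

min_d = TTestIndPower().solve_power(
    effect_size=None,               # solve for the detectable effect size
    nobs1=n_exposed,
    ratio=n_unexposed / n_exposed,  # nobs2 = nobs1 * ratio
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(f"minimum detectable d = {min_d:.2f}")  # close to the reported 0.41
```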

Table 6. Descriptive and inferential statistics, including Bayes factors, for pre-existing knowledge and consent preference statement group comparisons.

We ran a series of independent-samples t-tests and corrected for multiple comparisons (i.e., adjusted statistical significance: p < .003); we also report corrected values (i.e., unequal variances assumed) where Levene's test indicated unequal variances. Overall, our analyses revealed that trauma-exposed and non-trauma-exposed participants had similar preferences, ps: .130–.931, ds: 0.01–0.21. We also calculated Bayes factors. Here, we found substantial evidence in favor of the null hypothesis (BF10: 0.16–0.24), relative to the alternative hypothesis, for 13 preference statements, in addition to anecdotal evidence in favor of the null hypothesis—over the alternative hypothesis—for four preference statements (BF10: 0.34–0.43). Finally, when considering our results from a traditional significance perspective (i.e., without correction for multiple comparisons), none of our analyses reached significance (i.e., p < .05).
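To make the per-statement pipeline concrete, here is a minimal sketch of one such comparison. The DataFrame and column names are hypothetical, and pingouin's default JZS prior for BF10 may differ from the exact Bayes factor settings used in our analyses.

```python
# Minimal sketch of a per-statement group comparison: Levene's test decides
# whether to apply the Welch (unequal-variances) correction; pingouin returns
# t, p, Cohen's d, and a JZS Bayes factor. `df`, `trauma_exposed`, and the
# item column names are hypothetical.
import pandas as pd
import pingouin as pg
from scipy import stats

ALPHA_CORRECTED = 0.003  # significance threshold after multiple-comparison correction

def compare_statement(df: pd.DataFrame, item: str) -> dict:
    exposed = df.loc[df["trauma_exposed"] == 1, item].dropna()
    unexposed = df.loc[df["trauma_exposed"] == 0, item].dropna()

    # Apply the unequal-variances correction only when Levene's test is significant
    _, levene_p = stats.levene(exposed, unexposed)
    res = pg.ttest(exposed, unexposed, correction=bool(levene_p < .05))

    return {
        "item": item,
        "t": res["T"].iloc[0],
        "p": res["p-val"].iloc[0],
        "d": res["cohen-d"].iloc[0],
        "BF10": float(res["BF10"].iloc[0]),  # < 1/3 = substantial evidence for the null
        "sig_after_correction": res["p-val"].iloc[0] < ALPHA_CORRECTED,
    }
```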

Taken together then, our results indicate that overall, undergraduates have similar preferences for consent practices regardless of prior trauma-exposure. Importantly, undergraduates’ preferences for the communication of risks during the consent process did not seem to differ based on trauma-exposure.

What Do Participants Want to Change, if Anything, About Current Consent Guidelines?

Just over half of our participants reported that they were content with current ethics guidelines (i.e., wanted no change; 60.8%). Some participants reported wanting various specific changes relating to: providing more detail (e.g., about researchers' qualifications; 4.2%), removing parts of consent that currently exist (3%), improving risk information (e.g., including more discussion of risks; 2.3%), seeing study results (2.3%), including mental health service referrals (2.3%), including explicit debriefing information when deception is used (1.9%), improving understanding of rights (1.1%), and presenting consent information in a simplified way (0.4%). Additionally, 5.7% of participants reported changes that are already enacted in current guidelines. Some participants did not respond to the question (20.8%) and other participants indicated they were unsure whether guidelines should change (0.8%); the remaining responses were unclear and were not coded (3.4%).

Here, a substantial portion of participants indicated that they were satisfied with current consent information. Although we did not code them as such, the few participants who did not respond to the question were potentially also satisfied with current guidelines (i.e., desired no change). Most requested changes centered on consent information that—in line with current ethical guidelines—should already be provided (National Health and Medical Research Council, Citation2018; Public Welfare, Citation2018). For example, information regarding risks, and information about how the study's results can be obtained (e.g., via publication of the results), already form parts of consent. Together, our undergraduate feedback suggests these consent information areas need to be more consistently implemented by researchers/IRBs.

General discussion

In two studies, we examined how effective current ethical guidelines are (i.e., from British Psychological Society, Citation2021; National Health and Medical Research Council, Citation2018; Public Welfare, Citation2018) from the participant's perspective, specifically in trauma-exposed and non-trauma-exposed participants. Overall, we found that participants across both samples were generally satisfied with current consent guidelines (e.g., in terms of the information provided) and expressed preferences that align with the current system (e.g., making consent information more consistent across IRBs/studies). Notably, there was a small yet consistent trend of participants reporting seemingly unethical behavior from IRBs and/or researchers (e.g., IRB contact information that was inaccurate or absent). Thus, our study serves as a reminder to IRBs and researchers alike to engage in good faith with the consent process. Further, participants showed some—albeit limited—knowledge regarding the consent information they should receive. Importantly, we found that preferences and knowledge were similar across both samples, regardless of trauma-exposure. We interpret and discuss the implications of these findings below.

Notably, irrespective of prior trauma-exposure, participant preferences for consent were similar, particularly for core consent components (e.g., voluntariness, risks, methods, etc.). These similarities occurred despite variability between the samples (i.e., differences in perceived prior study experience, mean age, prior education, and gender distribution). Our finding contradicts areas of IRB and researcher apprehension regarding psychological trauma research (described in Jaffe et al., Citation2015; Newman et al., Citation2006). For instance, our data—from two samples in which at least two thirds of participants reported exposure to a Criterion-A trauma—suggest people who have prior trauma-exposure do not warrant special precautions as part of the informed consent process. Recall that more than half of participants preferred no change to current ethical guidelines and few participants required drastic changes to meet their needs. Across both samples, several predominant suggestions for consent guideline improvement were also either already addressed in current ethical guidelines or specific to the sample-type and unrelated to prior trauma-exposure (i.e., MTurk worker preferences relating to pay). Indeed, our results suggest that trauma-exposed participants are satisfied with current ethical guidelines and do not necessitate procedures specific to an ethically "vulnerable" population (Newman & Kaloupek, Citation2009), as might be applied in the case of children, etc. (National Health and Medical Research Council, Citation2018). In fact, most participants' feedback about consent related to actions that researchers and/or IRBs could implement immediately (e.g., ensuring consistency across guideline implementation, providing more detail). Together then, for undergraduate and crowdsourcing samples, our results suggest that prior trauma-exposure does not impact participants' consent preferences.

Turning to preferences more generally, both samples expressed some unique requests. For instance, MTurk workers highlighted issues around pay and concerns about accurate completion times. And undergraduates expressed that most—if not all—potential associated risks of participation should be included on consent forms. Yet, our samples—despite differences in characteristics (e.g., education, age distribution)—expressed several similar consent preferences (e.g., including information at consent that should already be included, such as risk, data use [including how to access study findings], and withdrawal information). Our findings thus do not fit with some prior research regarding psychological consent form improvement (e.g., Douglas et al., Citation2021; Perrault & Keating, Citation2018), though this prior research occurred in non-psychologically sensitive areas. Hence, future research should examine the efficacy of some participant suggestions (e.g., bolding risk-related information)—provided here—within a psychologically sensitive research context.

Moreover, our preference findings do not support some prior, well-intentioned, researcher suggestions to remove parts of consent or shorten the length of forms (Perrault & Keating, Citation2018; Perrault & Nazione, Citation2016). However, we must consider how our samples' consent preferences fit with prior research on consent behavior showing that participants generally do not read, or only skim, consent forms, even when alternative consent forms are offered (e.g., shorter form length, bolding important information; McNutt et al., Citation2008; Perrault & Nazione, Citation2016; Ripley et al., Citation2018). Here, we argue that it is more important for participants to know they have access to all relevant consent information, even if they choose not to engage with it. Indeed, prior research found 41.8% of participants said they would read a consent form if they felt it concerned an issue important to them, but this importance did not typically extend to psychological research consent forms (Perrault & Keating, Citation2018). Perhaps then, if we know amendments to consent do not seem to boost consent engagement, but we want informed participants, we should continue to focus our efforts on delivery format. For instance, prior research found that having an experimenter present while participants engaged in the consent process made participants more likely to read the form (Ripley et al., Citation2018). Therefore, future research should continue to investigate effective delivery formats to balance participant preferences (i.e., the critical consent components they want included) against their behavior. One example is to include more interactive elements at consent to boost attention (Geier et al., Citation2021).

Regarding participants' preexisting knowledge of consent, our data revealed that participants had minimal-to-moderate existing knowledge. We note a discrepancy between our two measures of preexisting knowledge (i.e., open-text free recall vs. ratings of consent statements). One explanation for this discrepancy is that our consent statements reminded participants of aspects of consent they had forgotten about during free recall, and so they rated these aspects as true. Alternatively, perhaps participants engaged in socially desirable responding (e.g., participants had already consented to participate, so perhaps they wanted to be seen as "good" by responding in line with what they thought we would want them to know; Crowne & Marlowe, Citation1960; Rasinski et al., Citation1999). Regardless, just as prior consent research has reported participants' low-to-moderate consent form comprehension (e.g., Ripley et al., Citation2018), we find a similar pattern of results here. One counter-explanation for participants choosing not to read, or to only skim, consent forms is that they have strong basic knowledge of psychological consent practices and therefore do not feel they need to closely engage with consent. Yet, our findings do not support this explanation. Rather, our findings add to the idea that many participants consent to psychological research studies without being properly informed, likely because they believe the information is not important enough to read (Perrault & Keating, Citation2018).

We originally suspected our samples may differ in their knowledge results because of differences in study completion experience. But this was not the case, despite observable differences in reported experience (i.e., crowdsourced participants were very experienced, with a median of 5,000 studies completed; undergraduates were somewhat experienced, with M = 5.43). Our data therefore provide further evidence that removing parts of consent or shortening consent may not be a viable solution moving forward. Participants do not seem to holistically know what they should be told—and to what standard—at consent; hence, removing existing information at consent may mean they are disadvantaged by being less informed (i.e., losing access to information they do not know they should have). Moving forward, participants' baseline knowledge could be improved by requesting they complete a standardized consent training course (i.e., informing them of their rights, key elements of consent that may differ between studies, and key elements to focus on when reading consent forms). Afterward, participants could be provided with a handout that helps them navigate consent (e.g., what different terms mean). This suggested approach would provide participants with a foundation for consent and equip them with the skills to identify when consent information is not adequate – as participants sometimes did here when they cited the need for consent information that should already be provided.

Our study has limitations. We did not collect information relating to current psychopathology (e.g., PTSD symptomology, depression, anxiety) or details of prior trauma-exposure (e.g., type of event, repeated exposure). We chose not to collect such data because we wanted to capture all participants' consent views without turning the study into a "trauma-related" study itself (e.g., by having to include trauma questionnaire information in the advertisement). However, it is possible that participants who have experienced certain types of traumatic events differ in their consent preferences. For instance, some researchers have raised concerns about people who have experienced interpersonal violence and whether these participants may feel coerced during the consent process (Newman & Kaloupek, Citation2009). Thus, gathering consent preferences from specific trauma populations would enrich our understanding of participant consent needs. Additionally, our samples' high reported trauma-exposure rates may raise questions about how meaningful it is to divide participants by trauma-exposure alone. Yet, this limitation provides additional support for the idea that IRB concerns regarding trauma-related research are likely unfounded. If psychological samples comprise a majority of trauma-exposed people, and these participants have consent preferences similar to those of non-trauma-exposed participants, it is not necessarily meaningful for IRBs to differentiate on trauma-exposure alone. Future research, however, should investigate consent preferences alongside additional psychopathology measures (e.g., PTSD symptomology, trauma-exposure type).

In Study 1, crowdsourced participants reported completing an average of over 14,000 studies. One possibility is that this estimate reflects bot responding. However, we believe this possibility is unlikely because we used several strategies to maintain data quality and minimize bot/server-farmer responding (see https://osf.io/gjryt for strategies in full). Importantly, we used CloudResearch-approved participants (i.e., who passed CloudResearch's attention and engagement measures; Hauser et al., Citation2023) with settings such as blocking suspicious geocode locations and limiting approval rating to 95–100%. An alternative explanation for participants' varied frequency estimates is that they used different decision-making strategies to estimate their prior study completion (e.g., estimating approximately how many studies they complete daily and multiplying up to an imagined lifetime total; e.g., Brown, Citation1995). Some of these strategies may have been effective, while others may have been biased by decision-making heuristics (e.g., relying on information that comes to mind quickly and with ease; e.g., Dale, Citation2015), resulting in overestimation. Indeed, the study frequency data were strongly positively skewed, with outliers present, and resemble prior studies investigating MTurk (CloudResearch) use in terms of large standard deviations for yearly estimates (e.g., Douglas et al., Citation2023) and more than half of participants reporting they use MTurk for more than 8 hours per week (Peer et al., Citation2022). Our data highlight the importance of measuring participants' prior study completion frequency, and other related topics, when relying on crowdsourced data. Doing so will help researchers understand this unique population.
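To illustrate why we summarize such frequency estimates with medians, consider a toy example (the numbers below are invented for illustration only): a handful of extreme over-estimates pulls the mean far above the median in positively skewed data.

```python
# Toy illustration (invented numbers): with strong positive skew, a few extreme
# completion estimates inflate the mean well beyond the median, which is why
# the median is the more informative summary here.
import numpy as np
from scipy import stats

estimates = np.array([50, 200, 500, 1_000, 2_000, 5_000, 8_000, 100_000, 150_000])

print(f"mean   = {estimates.mean():,.0f}")      # ~29,639: pulled up by the outliers
print(f"median = {np.median(estimates):,.0f}")  # 2,000: robust to the outliers
print(f"skew   = {stats.skew(estimates):.2f}")  # strongly positive
```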

Here, we strictly assessed information that is provided to participants during the consent process and not other areas of consent (e.g., whether people actually withdraw from participation vs. saying they know how to, coercion). Although some prior consent research shows that people exposed to traumatic events can refuse participation and/or withdraw from sensitive psychological research (Brabin & Berah, Citation1995; Hlavka et al., Citation2007), it would be useful for IRBs and researchers to explore participants' views and preferences on other operationalizations of consent. Here, we focused on two commonly used sample types within psychological research (i.e., crowdsourced participants and undergraduate students, albeit in two different Western countries) and thus our findings are specific to these sample types. Future research should address whether the participant preferences found in the current study hold across different samples (e.g., US undergraduate students, clinical populations, community members), although US undergraduate samples are typically treated as generalizable to other undergraduate populations. Finally, the consent statements participants responded to were developed based on Western ethical guidelines, and therefore the results here cannot be generalized beyond this context.

Together, our research provides evidence on crowdsourced and undergraduate participants' views and preferences for consent practices, particularly where sensitive research is concerned. Notably, consent views were similar among crowdsourced and undergraduate participants, irrespective of trauma-exposure. Thus, we hope these data can act as both a guide and a reminder for IRBs and researchers when formulating consent processes for these samples, and that they serve as a basis for further research into how we can address ethical issues relating to consent for our participants.


Acknowledgments

We thank Olivia Moller for her assistance in coding our open-text data.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Supplementary material

Supplemental data for this article can be accessed online at https://doi.org/10.1080/10508422.2024.2341639.

Notes

1 We chose these samples because they are frequently used within psychological research (e.g., Luong & Lomanowska, Citation2022; Strickland & Stoops, Citation2019) and thus, would likely provide valuable insight into consent views for these populations (i.e., we could generalize our findings to these samples). These are also the convenience samples we had access to, given US crowdsourced participants are typically easier to source than Australian crowdsourced participants.

2 To ask participants this question, our ethics committee requested that we present participants with a second consent form specific to the Criterion-A question. Doing so allowed us to capture participants’ preexisting knowledge and preferences for consent if they had experienced a past traumatic event; such participants may have otherwise avoided participating if a risk warning was included in the first consent form.

3 After these questionnaires, participants went on to complete an imagined consent risk presentation options task and rate the risk presentation options. These data are reported in a separate manuscript currently under preparation.

4 Three participants chose not to respond to one of the reason items (different items per participant) and therefore were left out of the analysis. Participants included in these analyses: n = 265.

5 Because we changed the way we collected the decision-to-participate data prior to starting data collection – that is, we asked people to rate their agreement on a Likert-type scale – we deviated from our pre-registered plan to use chi-square comparisons. This change was due to an oversight on the authors' behalf and also applies to this analysis in Study 2.

6 We repeated these analyses after removing participants who did not consent to answering the Criterion-A question. There was minimal difference between means (i.e., 0.01–0.30 change) and therefore we include the results for the interested reader at: https://osf.io/gnwq4/.

7 We repeated these analyses after removing participants who did not consent to answering the Criterion-A question. There was minimal difference between means (i.e., 0.01–0.30 change) and therefore we include the results for the interested reader at: https://osf.io/gnwq4/.

8 We also tested whether prior trauma-exposure influenced participants' preexisting knowledge; it did not. However, because this analysis was not central to our question of participant preferences, nor was it pre-registered, we report it at https://osf.io/gnwq4/ for the interested reader.

9 We removed unclear responses from this descriptive analysis (e.g., "2 topics," "a lot," "I am in my first year") because we could not accurately code them; however, given the nature of our sample, we can assume many participants were participating in psychological research for the first time that year.

10 We repeated these analyses after removing participants who did not consent to answering the Criterion-A question (n = 241). There was minimal difference between means (i.e., 0.01–0.50 change) and therefore we include the results for the interested reader at: https://osf.io/ru8e5/.

11 We repeated these analyses after removing participants who did not consent to answering the Criterion-A question (n = 241). There was minimal difference between means (i.e., 0.01–0.50 change) and therefore we include the results for the interested reader at: https://osf.io/ru8e5/.

REFERENCES

  • Abu-Rus, A., Bussell, N., Olsen, D. C., Davis-Ku, M. A. A. L., & Arzoumanian, M. A. (2019). Informed consent content in research with survivors of psychological trauma. Ethics & Behavior, 29(8), 595–606. https://doi.org/10.1080/10508422.2018.1551802
  • Albala, I., Doyle, M., & Appelbaum, P. S. (2010). The evolution of consent forms for research: A quarter century of changes. IRB: Ethics & Human Research, 32(3), 7–11.
  • Allison, R. D., Abbott, L. J., & Wichman, A. (2008). Roles and experiences of non-scientist institutional review board members at the National Institutes of Health. IRB: Ethics & Human Research, 30(5), 8.
  • Amazon Mechanical Turk. (2018, September 10). Amazon Mechanical Turk. https://www.mturk.com/
  • American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.).
  • Barnes, K., Faasse, K., Geers, A. L., Helfer, S. G., Sharpe, L., Colloca, L., & Colagiuri, B. (2019). Can positive framing reduce nocebo side effects? Current evidence and recommendation for future research. Frontiers in Pharmacology, 10, 167. https://doi.org/10.3389/fphar.2019.00167
  • Becker-Blease, K. A., & Freyd, J. J. (2006). Research participants telling the truth about their lives: The ethics of asking and not asking about abuse. American Psychologist, 61(3), 218. https://doi.org/10.1037/0003-066x.61.3.218
  • Benjet, C., Bromet, E., Karam, E. G., Kessler, R. C., McLaughlin, K. A., Ruscio, A. M., Shahly, V., Stein, D. J., Petukhova, M., Hill, E., Alonso, J., Atwoli, L., Bunting, B., Bruffaerts, R., Caldas-de-Almeida, J. M., de Girolamo, G., Florescu, S., Gureje, O., Huang, Y., Lepine, J. P., Kawakami, N., … Koenen, K. C. (2016). The epidemiology of traumatic event exposure worldwide: Results from the World Mental Health Survey Consortium. Psychological Medicine, 46(2), 327–343. https://doi.org/10.1017/s0033291715001981
  • Bowman, N. A., & Holmes, J. M. (2018). Getting off to a good start? first-year undergraduate research experiences and student outcomes. Higher Education, 76(1), 17–33. https://doi.org/10.1007/s10734-017-0191-4
  • Boyer Commission on Educating Undergraduates in the Research University. (1998). Reinventing undergraduate education: A blueprint for America’s research universities. State University of New York at Stony Brook for the Carnegie Foundation for the Advancement of Teaching.
  • Brabin, P. J., & Berah, E. F. (1995). Dredging up past traumas: Harmful or helpful? Psychiatry, Psychology and Law, 2(2), 165. https://doi.org/10.1080/13218719509524863
  • Brashers, D. E. (2001). Communication and uncertainty management. Journal of Communication, 51(3), 477–497. https://doi.org/10.1111/j.1460-2466.2001.tb02892.x
  • Brashers, D. E., Goldsmith, D. J., & Hsieh, E. (2002). Information seeking and avoiding in health contexts. Human Communication Research, 28(2), 258–271. https://doi.org/10.1111/j.1468-2958.2002.tb00807.x
  • Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77. https://doi.org/10.1191/1478088706qp063oa
  • Bridgland, V. M. E., & Takarangi, M. K. T. (2022). Something distressing this way comes: The effects of trigger warnings on avoidance behaviors in an analogue trauma task. Behavior Therapy, 53(3), 414–427. https://doi.org/10.1016/j.beth.2021.10.005
  • British Psychological Society. (2021). BPS code of human research ethics. https://www.bps.org.uk/sites/www.bps.org.uk/files/Policy/Policy%20-%20Files/BPS%20Code%20of%20Human%20Research%20Ethics.pdf
  • Brown, N. R. (1995). Estimation strategies and the judgment of event frequency. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21(6), 1539–1553. https://doi.org/10.1037//0278-7393.21.6.1539
  • Burgess, M. M. (2007). Proposing modesty for informed consent. Social Science & Medicine, 65(11), 2284–2295. https://doi.org/10.1016/j.socscimed.2007.08.006
  • Campbell, R., Goodman-Williams, R., & Javorka, M. (2019). A trauma-informed approach to sexual violence research ethics and open science. Journal of Interpersonal Violence, 34(23–24), 4765–4793. https://doi.org/10.1177/0886260519871530
  • Carlson, E. B., Newman, E., Daniels, J. W., Armstrong, J., Roth, D., & Loewenstein, R. (2003). Distress in response to and perceived usefulness of trauma research interviews. Journal of Trauma & Dissociation, 4(2), 131–142. https://doi.org/10.1300/j229v04n02_08
  • Chilton, L. B., Horton, J. J., Miller, R. C., & Azenkot, S. (2010). Task search in a human computation market. Proceedings of the ACM SIGKDD workshop on human computation (pp. 1–9). https://doi.org/10.1145/1837885.1837889
  • Collogan, L. K., Tuma, F., Dolan-Sewell, R., Borja, S., & Fleischman, A. R. (2004). Ethical issues pertaining to research in the aftermath of disaster. Journal of Traumatic Stress, 17(5), 363–372. https://doi.org/10.1023/b:jots.0000048949.43570.6a
  • Cook, A. F., & Hoas, H. (2011). Trading places: What the research participant can tell the investigator about informed consent. Journal of Clinical Research Bioethics, 2(8). https://doi.org/10.4172/2155-9627.1000108
  • Cromer, L. D., Freyd, J. J., Binder, A. K., DePrince, A. P., & Becker-Blease, K. (2006). What’s the risk in asking? Participant reaction to trauma history questions compared with reaction to other personal questions. Ethics & Behavior, 16(4), 347–362. https://doi.org/10.1207/s15327019eb1604_5
  • Crowne, D. P., & Marlowe, D. (1960). A new scale of social desirability independent of psychopathology. Journal of Consulting Psychology, 24(4), 349–354. https://doi.org/10.1037/h0047358
  • Dale, S. (2015). Heuristics and biases: The science of decision-making. Business Information Review, 32(2), 93–99. https://doi.org/10.1177/0266382115592536
  • DePrince, A. P., & Freyd, J. J. (2004). Costs and benefits of being asked about trauma history. Journal of Trauma Practice, 3(4), 23–35. https://doi.org/10.1300/J189v03n04_02
  • Douglas, B. D., Ewell, P. J., Brauer, M., & Hallam, J. S. (2023). Data quality in online human-subjects research: Comparisons between MTurk, prolific, CloudResearch, qualtrics, and SONA. Public Library of Science ONE, 18(3), e0279720. https://doi.org/10.1371/journal.pone.0279720
  • Douglas, B. D., McGorray, E. L., & Ewell, P. J. (2021). Some researchers wear yellow pants, but even fewer participants read consent forms: Exploring and improving consent form reading in human subjects research. Psychological Methods, 26(1), 61. https://doi.org/10.1037/met0000267
  • Du Mont, J., & Stermac, L. (1996). Impact of free condom distribution on the use of dual protection against pregnancy and sexually transmitted disease. Canadian Journal of Human Sexuality, 5(1), 25–29.
  • Edwards, K. M., Kearns, M. C., Calhoun, K. S., & Gidycz, C. A. (2009). College women’s reactions to sexual assault research participation: Is it distressing? Psychology of Women Quarterly, 33(2), 225–234. https://doi.org/10.1111/j.1471-6402.2009.01492.x
  • Epstein, Y. M., Suedfeld, P., & Silverstein, S. J. (1973). The experimental contract: Subjects’ expectations of and reactions to some behaviors of experimenters. American Psychologist, 28(3), 212. https://doi.org/10.1037/h0034454
  • Faasse, K. (2019). Nocebo effects in health psychology. Australian Psychologist, 54(6), 453–465. https://doi.org/10.1111/ap.12392
  • Fontes, L. A. (2004). Ethics in violence against women research: The sensitive, the dangerous, and the overlooked. Ethics & Behavior, 14(2), 141–174. https://doi.org/10.1207/s15327019eb1402_4
  • Frazier, P., Anders, S., Perera, S., Tomich, P., Tennen, H., Park, C., & Tashiro, T. (2009). Traumatic events among undergraduate students: Prevalence and associated symptoms. Journal of Counseling Psychology, 56(3), 450. https://doi.org/10.1037/a0016412
  • Geier, C., Adams, R. B., Mitchell, K. M., & Holtz, B. E. (2021). Informed consent for online research—is anybody reading?: Assessing comprehension and individual differences in readings of digital consent forms. Journal of Empirical Research on Human Research Ethics, 16(3), 154–164. https://doi.org/10.1177/15562646211020160
  • Hauser, D. J., Moss, A. J., Rosenzweig, C., Jaffe, S. N., Robinson, J., & Litman, L. (2023). Evaluating CloudResearch’s Approved Group as a solution for problematic data quality on MTurk. Behavior Research Methods, 55(8), 3953–3964. https://doi.org/10.31234/osf.io/48yxj
  • Haverkamp, B. E. (2005). Ethical perspectives on qualitative research in applied psychology. Journal of Counselling Psychology, 52(2), 146. https://doi.org/10.1037/0022-0167.52.2.146
  • Hebenstreit, C. L., & DePrince, A. P. (2012). Perceptions of participating in longitudinal trauma research among women exposed to intimate partner abuse. Journal of Empirical Research on Human Research Ethics, 7(2), 60–69. https://doi.org/10.1525/jer.2012.7.2.60
  • Hlavka, H. R., Kruttschnitt, C., & Carbone-López, K. C. (2007). Revictimizing the victims? Interviewing women about interpersonal violence. Journal of Interpersonal Violence, 22(7), 894–920. https://doi.org/10.1177/0886260507301332
  • Jaffe, A. E., DiLillo, D., Hoffman, L., Haikalis, M., & Dykstra, R. E. (2015). Does it hurt to ask? A meta-analysis of participant reactions to trauma research. Clinical Psychology Review, 40, 40–56. https://doi.org/10.1016/j.cpr.2015.05.004
  • Jorm, A. F., Kelly, C. M., & Morgan, A. J. (2007). Participant distress in psychiatric research: A systematic review. Psychological Medicine, 37(7), 917–926. https://doi.org/10.1017/s0033291706009779
  • Kassam-Adams, N., & Newman, E. (2005). Child and parent reactions to participation in clinical research. General Hospital Psychiatry, 27(1), 29–35. https://doi.org/10.1016/j.genhosppsych.2004.08.007
  • Kilgo, C. A., Pasquesi, K., Sheets, J. K. E., & Pascarella, E. T. (2014). The estimated effects of participation in service-learning on liberal arts outcomes. International Journal of Research on Service-Learning and Community Engagement, 2(1), 18–31. https://doi.org/10.37333/001c.002001003
  • Kilpatrick, D. G., Resnick, H. S., Milanak, M. E., Miller, M. W., Keyes, K. M., & Friedman, M. J. (2013). National estimates of exposure to traumatic events and PTSD prevalence using DSM‐IV and DSM‐5 criteria. Journal of Traumatic Stress, 26(5), 537–547. https://doi.org/10.1002/jts.21848
  • Legerski, J. P., & Bunnell, S. L. (2010). The risks, benefits, and ethics of trauma-focused research participation. Ethics & Behavior, 20(6), 429–442. https://doi.org/10.1080/10508422.2010.521443
  • Luong, R., & Lomanowska, A. M. (2022). Evaluating Reddit as a crowdsourcing platform for psychology research projects. Teaching of Psychology, 49(4), 329–337. https://doi.org/10.1177/00986283211020739
  • Mann, T. (1994). Informed consent for psychological research: Do subjects comprehend consent forms and understand their legal rights? Psychological Science, 5(3), 140–143. https://doi.org/10.1111/j.1467-9280.1994.tb00650.x
  • Mathews, B., MacMillan, H. L., Meinck, F., Finkelhor, D., Haslam, D., Tonmyr, L., Gonzalez, A., Afifi, T. O., Scott, J. G., Pacella, R. E., Higgins, D. J., Thomas, H., Collin-Vezina, D., & Walsh, K. (2022). The ethics of child maltreatment surveys in relation to participant distress: Implications of social science evidence, ethical guidelines, and law. Child Abuse & Neglect, 123, 105424. https://doi.org/10.1016/j.chiabu.2021.105424
  • McClinton Appollis, T., Lund, C., de Vries, P. J., & Mathews, C. (2015). Adolescents’ and adults’ experiences of being surveyed about violence and abuse: A systematic review of harms, benefits, and regrets. American Journal of Public Health, 105(2), e31–e45. https://doi.org/10.2105/AJPH.2014.302293
  • McNutt, L. A., Waltermaurer, E., Bednarczyk, R. A., Carlson, B. E., Kotval, J., McCauley, J., Campbell, J. C., & Ford, D. E. (2008). Are we misjudging how well informed consent forms are read? Journal of Empirical Research on Human Research Ethics, 3(1), 89–97. https://doi.org/10.1525/jer.2008.3.1.89
  • Moeck, E. K., Bridgland, V. M., & Takarangi, M. K. (2022). Food for thought: Commentary on Burnette et al. (2021). Concerns and recommendations for using Amazon MTurk for eating disorder research. The International Journal of Eating Disorders, 55(2), 282–284. https://doi.org/10.1002/eat.23671
  • National Health and Medical Research Council. (2018). National statement on ethical conduct in human research. Canberra, Australia: National Health and Medical Research Council. https://www.nhmrc.gov.au/about-us/publications/national-statement-ethical-conduct-human-research-2007-updated-2018
  • Newman, E. (2008). Assessing trauma and its effects without distress: A guide to working with IRBs. APS Observer, 21. https://www.psychologicalscience.org/observer/assessing-trauma-and-its-effects-without-distress-a-guide-to-working-with-irbs
  • Newman, E., & Kaloupek, D. (2009). Overview of research addressing ethical dimensions of participation in traumatic stress studies: Autonomy and beneficence. Journal of Traumatic Stress: Official Publication of the International Society for Traumatic Stress Studies, 22(6), 595–602. https://doi.org/10.1002/jts.20465
  • Newman, E., Risch, E., & Kassam-Adams, N. (2006). Ethical issues in trauma-related research: A review. Journal of Empirical Research on Human Research Ethics, 1(3), 29–46. https://doi.org/10.1525/jer.2006.1.3.29
  • Peer, E., Rothschild, D., Gordon, A., Evernden, Z., & Damer, E. (2022). Data quality of platforms and panels for online behavioral research. Behavior Research Methods, 1. https://doi.org/10.3758/s13428-021-01694-3
  • Perrault, E. K., & Keating, D. M. (2018). Seeking ways to inform the uninformed: Improving the informed consent process in online social science research. Journal of Empirical Research on Human Research Ethics, 13(1), 50–60. https://doi.org/10.1177/1556264617738846
  • Perrault, E. K., & McCullock, S. P. (2019). Concise consent forms appreciated—still not comprehended: Applying revised common rule guidelines in online studies. Journal of Empirical Research on Human Research Ethics, 14(4), 299–306. https://doi.org/10.1177/1556264619853453
  • Perrault, E. K., & Nazione, S. A. (2016). Informed consent—uninformed participants: Shortcomings of online social science consent forms and recommendations for improvement. Journal of Empirical Research on Human Research Ethics, 11(3), 274–280. https://doi.org/10.1177/1556264616654610
  • Public Welfare, 45 C.F.R. § 46.102 (2018).
  • Rasinski, K. A., Willis, G. B., Baldwin, A. K., Yeh, W., & Lee, L. (1999). Methods of data collection, perceptions of risks and losses, and motivation to give truthful answers to sensitive survey questions. Applied Cognitive Psychology, 13(5), 465–484. https://doi.org/10.1002/(SICI)1099-0720(199910)13:5<465::AID-ACP609>3.0.CO;2-Y
  • Ripley, K. R., Hance, M. A., Kerr, S. A., Brewer, L. E., & Conlon, K. E. (2018). Uninformed consent? The effect of participant characteristics and delivery format on informed consent. Ethics & Behavior, 28(7), 517–543. https://doi.org/10.1080/10508422.2018.1456926
  • Rosa, P. J., Lopes, P., Oliveira, J., & Pascoal, P. (2019). Does length really matter? Effects of number of pages in the informed consent on reading behavior: An eye-tracking study. New Technologies to Improve Patient Rehabilitation: 4th Workshop, REHAB 2016 (pp. 116–125). Springer International Publishing, Lisbon, Portugal, October 13-14, 2016. Revised Selected Papers 4. https://doi.org/10.1007/978-3-030-16785-1_9
  • Rothstein, W. G., & Phuong, L. H. (2007). Ethical attitudes of nurse, physician, and unaffiliated members of institutional review boards. Journal of Nursing Scholarship, 39(1), 75–81. https://doi.org/10.1111/j.1547-5069.2007.00147.x
  • Russell, C., Thompson, J., McGee, H., & Slavin, S. (2019). “I consent”: an eye-tracking study of IRB Informed Consent Forms. Clemson, SC. ACM. Retrieved November 16, 2023, from http://andrewd.ces.clemson.edu/courses/cpsc412/fall19/teams/reports/group08.pdf
  • Ruzek, J. I., & Zatzick, D. F. (2000). Ethical considerations in research participation among acutely injured trauma survivors: An empirical investigation. General Hospital Psychiatry, 22(1), 27–36. https://doi.org/10.1016/s0163-8343(99)00041-9
  • Schönbrodt, F. D., & Perugini, M. (2013). At what sample size do correlations stabilize? Journal of Research in Personality, 47(5), 609–612. https://doi.org/10.1016/j.jrp.2013.05.009
  • Schönbrodt, F. D., & Perugini, M. (2018). At what sample size do correlations stabilize? Corrigendum. https://doi.org/10.1016/j.jrp.2018.02.010
  • Singer, E. (1984). Public reactions to some ethical issues of social research: Attitudes and behavior. Journal of Consumer Research, 11(1), 501–509. https://doi.org/10.1086/208986
  • Strickland, J. C., & Stoops, W. W. (2019). The use of crowdsourcing in addiction science research: Amazon Mechanical Turk. Experimental and Clinical Psychopharmacology, 27(1), 1. https://doi.org/10.1037/pha0000235
  • Varnhagen, C. K., Gushta, M., Daniels, J., Peters, T. C., Parmar, N., Law, D., Hirsch, R., Takach, B. S., & Johnson, T. (2005). How informed is online informed consent? Ethics & Behavior, 15(1), 37–48. https://doi.org/10.1207/s15327019eb1501_3
  • Webster, R. K., Weinman, J., & Rubin, G. J. (2018). Positively framed risk information in patient information leaflets reduces side effect reporting: A double-blind randomized controlled trial. Annals of Behavioral Medicine, 52(11), 920–929. https://doi.org/10.1093/abm/kax064
  • Wells, R. E., & Kaptchuk, T. J. (2012). To tell the truth, the whole truth, may do patients harm: The problem of the nocebo effect for informed consent. The American Journal of Bioethics, 12(3), 22–29. https://doi.org/10.1080/15265161.2011.652798
  • Wetzels, R., Matzke, D., Lee, M. D., Rouder, J. N., Iverson, G. J., & Wagenmakers, E. J. (2011). Statistical evidence in experimental psychology: An empirical comparison using 855 t tests. Perspectives on Psychological Science, 6(3), 291–298. https://doi.org/10.1177/1745691611406923
  • Xu, A., Baysari, M. T., Stocker, S. L., Leow, L. J., Day, R. O., & Carland, J. E. (2020). Researchers’ views on, and experiences with, the requirement to obtain informed consent in research involving human participants: A qualitative study. BMC Medical Ethics, 21(1), 1–11. https://doi.org/10.1186/s12910-020-00538-7
  • Yeater, E. A., & Miller, G. F. (2014). ‘Sensitive’-topics research: Is it really harmful to participants? APS Observer, 27. https://www.psychologicalscience.org/observer/sensitive-topics-research-is-it-really-harmful-to-participants/comment-page-1
  • Yeater, E., Miller, G., Rinehart, J., & Nason, E. (2012). Trauma and sex surveys meet minimal risk standards: Implications for institutional review boards. Psychological Science, 23(7), 780–787. https://doi.org/10.1177/0956797611435131