
Clearing the way for participatory data stewardship in artificial intelligence development: a mixed methods approach

Pages 1782-1799 | Received 10 May 2023, Accepted 28 Nov 2023, Published online: 18 Dec 2023

Abstract

Participatory data stewardship (PDS) empowers individuals to shape and govern their data via responsible collection and use. As artificial intelligence (AI) requires massive amounts of data, research must assess what factors predict consumers’ willingness to provide their data to AI. This mixed-methods study applied the extended Technology Acceptance Model (TAM) with additional predictors of trust and subjective norms. Participants’ data donation profile was also measured to assess the influence of individuals’ social duty, understanding of the purpose and guilt. Participants (N = 322) completed an experimental survey. Individuals were willing to provide data to AI via PDS when they believed it was their social duty, understood the purpose and trusted AI. However, the TAM may not be a complete model for assessing user willingness. This study establishes that individuals value the importance of trusting and comprehending the broader societal impact of AI when providing their data to AI.

Practitioner summary: To build responsible and representative AI, individuals are needed to participate in data stewardship. The factors driving willingness to participate in such methods were studied via an online survey. Trust, social duty and understanding the purpose significantly predicted willingness to provide data to AI via participatory data stewardship.

1. Introduction

Given the recent advancements in artificial intelligence (AI; Chow and Perrigo Citation2023; Spitale, Biller-Andorno, and Germani Citation2023) and the dangers of algorithmic bias (Buolamwini and Gebru Citation2018), participatory methods are required to increase the equity and efficiency of AI (Birhane et al. Citation2022). Participatory methods involve practices that enable individuals to shape and govern their data through its responsible collection and use (see Section 2.1; Patel et al. Citation2021). AI is a manufactured object or entity that can meet or exceed the requirements of the assigned task when considering cultural and demographic circumstances (Kelly, Kaye, and Oviedo-Trespalacios Citation2023). AI offers new approaches to fields, such as health care and education, by analysing vast data sets to inform recommendations (Monteith et al. Citation2022). However, diverse human data are needed to train AI as, without contextual knowledge, representation and interpretation, AI may cause harm (Buolamwini and Gebru Citation2018; Chan et al. Citation2021).

Machine learning models, which rely on data for training and evaluation, can be biased and lead to discriminatory outcomes. For example, since its launch in late 2022, ChatGPT has produced sexist and racist outputs, such as identifying white males as the standard of good scientists and intellectuals (Piantadosi Citation2023; Singh and Ramakrishnan Citation2023). In other instances, an AI mole scanner did not detect cancerous moles on dark skin types because it was trained on predominantly Caucasian skin tones (Goyal et al. Citation2020; Lashbrook Citation2018). Furthermore, AI systems are less likely to grant bank loans to women because the available historical examples overrepresent males (Eyers Citation2021). Similar under-representations exist for age, race and sexual orientation. In these instances, societal biases are mirrored and amplified in AI output due to the skewed data available.

Consequently, global policymakers and technology developers are seeking to ethically engage individuals in the value-driven design of policies and technologies, such as AI, by inviting them to intentionally submit their personal information to tackle societal issues (Kapoor and Whitt Citation2021; Parkes et al. Citation2023). While research has platformed the opinions of technology experts (Robertson and Maccarone Citation2022) and scholars (Couldry et al. Citation2018), understanding user attitudes towards AI is required to design ethical and practical devices (McMahon and Byrne Citation2008; Sloane et al. Citation2020). The absence of participatory methods results in AI devices being built from a solely technocratic perspective, disadvantaging the average user (Birhane et al. Citation2022). As data receivers, AI developers and researchers must actively collaborate with users to define and refine the design processes to minimise public risk (Gomez Ortega, Bourgeois, and Kortuem Citation2022). This paper aims to extract key themes that guide users’ willingness to provide their data to participate in AI data stewardship.

2. Background

2.1. Participatory data stewardship (PDS)

Participatory data stewardship (PDS) involves a person knowingly consenting to give their behavioural data to facilitate research and development (Gomez Ortega, Bourgeois, and Kortuem Citation2022). Examples of behavioural data include everything from details of menstrual cycles (Gomez Ortega, Bourgeois, and Kortuem Citation2022) to mobility data (Lawrence and Oh Citation2021). Historically, participation in AI development has often occurred somewhat unknowingly (e.g. training machine learning through reCAPTCHAs, ranking Uber drivers, using chatbots), with no direct or immediate benefit to the user (Sloane et al. Citation2020). For instance, OpenAI collects information about how each user interacts with ChatGPT, informing improvements to the software (Seger et al. Citation2023; Swisher Citation2023). However, when using ChatGPT, the provision and collection of data by the company may be overlooked as the user is not actively prompted to consent or object to submitting their data.

In contrast, a data steward determines what, when, how and with whom their private data is shared (Parkes et al. Citation2023; Young Citation2018). When fully realised, PDS encourages individuals, including those historically disenfranchised, to regain control and rebalance the asymmetries of traditional data collection used to train AI (Patel et al. Citation2021). Like open science initiatives, PDS fosters open data access via transparent, accessible and collaborative development (Stracke Citation2020). As such, PDS creates greater data equity by addressing current data gaps and allowing individuals the agency to participate in the modern data economy to benefit both the consumer and the technology.

Interest in PDS has risen recently due to changes in data-sharing policies that enable data transference, such as the General Data Protection Regulation, a European data protection law (Araujo et al. Citation2022; European Commission Citation2022; Gomez Ortega, Bourgeois, and Kortuem Citation2022). Gomez Ortega, Bourgeois, and Kortuem (Citation2022) followed 35 participants from eight countries who donated their data from a menstrual cycle app for research purposes. They found that various reasons drove donors’ willingness to contribute their data, including the type of data, the effort expectancy, information presentation and the context (Gomez Ortega, Bourgeois, and Kortuem Citation2022). As such, this previous research suggests that different predictive factors may drive user willingness to provide their data to AI.

While Gomez Ortega, Bourgeois, and Kortuem (Citation2022) provided insights into the factors that drove data donation in a specific context, the current paper aims to integrate PDS literature into AI research. Specifically, this paper aims to research the provision of data for AI rather than the donation of data for no compensation. However, as literature in this field is sparse, we will draw upon data donation literature to inform our study. To date, much of the recent research has focused on the use of governmental data rather than PDS for AI (Schmidthuber, Hilgers, and Randhawa Citation2021; Wijnhoven, Ehrenhard, and Kuhn Citation2015). Furthermore, limited research has assessed the psychological determinants of providing such data (Pilz and Gewald Citation2013). Jarrahi et al. (Citation2023) state that established psychological theories should be applied to assess AI use. As such, we will apply the Technology Acceptance Model (TAM; Davis Citation1985, Citation1989) to assess the behavioural drivers behind individuals’ willingness to provide their data to AI.

2.2. Willingness

Willingness describes an individual’s openness to performing a specific behaviour (Gibbons et al. Citation1998; Pomery et al. Citation2009). As such, measuring willingness represents how an individual believes they would react in a particular situation (Pomery et al. Citation2009). In their Prototype Willingness Model, Gibbons et al. (Citation1998) characterised willingness as an openness to a risky opportunity that manifests via a reaction when the opportunity arises. Alternatively, Fishbein (Citation2008) disagreed with Gibbons et al. (Citation1998), stating that intentions and willingness are highly correlated and both measure behaviour equally well, regardless of the situation (Pomery et al. Citation2009). Other studies have added to this debate, finding that willingness increases the predictive validity of behavioural intentions (Thornton, Gibbons, and Gerrard Citation2002; van Empelen and Kok Citation2006) and that willingness/intention is the primary determinant of actual use behaviour (Ajzen and Fishbein Citation1975). As such, willingness was selected over intentions as the dependent variable for the current study.

2.3. Technology acceptance model (TAM)

Technology acceptance models have been utilised to explain user intention, willingness and use for various novel and existing technologies, from online shopping technologies (Gefen, Karahanna, and Straub Citation2003) to futuristic automated vehicles (Kaye et al. Citation2020; Meyer-Waarden and Cloarec Citation2021). The TAM (Davis Citation1985, Citation1989) is commonly used to measure intentions and actual behaviour. It was adapted from the Theory of Reasoned Action (Fishbein and Ajzen Citation1975) and postulates that external variables, such as the media and social references, inform humans’ perceived usefulness (PU) and perceived ease of use (PEOU), which contribute to their intentions to use technology, ultimately driving their actual system usage (Davis Citation1985, Citation1989). Kelly, Kaye, and Oviedo-Trespalacios (Citation2023) reviewed research that assessed user acceptance of AI in different fields and found that the TAM was the most cited acceptance model, with PU positively predicting behavioural intention across multiple industries. Furthermore, the frequent extension of the TAM to include additional variables, such as subjective norms and trust, highlights its flexibility when researching acceptance amongst multiple contexts (Kelly, Kaye, and Oviedo-Trespalacios Citation2023).

2.3.1. Perceived usefulness (PU)

PU is defined as the degree to which a user perceives the technology as beneficial to their everyday life (Davis Citation1989). It is hypothesised that the more useful an individual perceives the technology, the more likely they are to use the device (Davis Citation1989). In the years since Davis’ (Citation1989) paper, the TAM has been adopted by a range of researchers who have consistently demonstrated that PU is the strongest positive predictor of an individual’s behavioural intention to use new technology when compared to PEOU (Davis Citation1989; Rafique et al. Citation2020; Venkatesh et al. Citation2003). As such, PU is well established as a significant positive predictor of behavioural intentions.

2.3.2. Perceived ease of use (PEOU)

PEOU refers to a user’s perception of how effortless a particular technological device would be to use (Davis Citation1989). As PEOU is only relevant to the intrinsic (i.e., technical) process of performing an activity, as opposed to the beneficial or entertaining aspects, it is reasoned to have a weaker influence on technology acceptance than PU (Davis Citation1989). Some studies have found that PEOU is not a significant predictor of behavioural intentions due to the heightened role of technology in society since Davis first proposed the model (Z. Liu, Shan, and Pigneur Citation2016; Mun et al. Citation2006; van Eeuwen Citation2017). As such, the relevance of PEOU in the TAM may depend on the context. For instance, PEOU may be higher when individuals have everyday contact with a device, such as a computer, compared to a virtual reality headset due to the infrequency of use and unfamiliarity with the functioning (Belanche, Casalo, and Flavian Citation2019; Kelly, Kaye, and Oviedo-Trespalacios Citation2023).

2.4. Subjective norms

Subjective norms are frequently included in acceptance models to capture the tendency to make decisions based on the desire for approval from important others (Ajzen Citation1991; Kelly, Kaye, and Oviedo-Trespalacios Citation2023). In revising the TAM (i.e., TAM2), subjective norms were incorporated as a predictive measure of technology acceptance (Venkatesh and Davis Citation2000). Following this revision, subjective norms have been a significant predictor in TAM extensions to predict acceptance via attitudes and intentions (Lin et al. Citation2021; Memon and Memon Citation2021; Song Citation2019). For instance, Song (Citation2019) extended the TAM and found that behavioural intention to use AI virtual assistants increases with subjective norms. Therefore, subjective norms are a significant and positive predictor of behavioural intentions to use AI when included in the TAM.

2.5. Trust

Trust in AI can be defined as the reliance on an agent for an individual’s well-being (Kaplan et al. Citation2021). It is, therefore, a subjective construct that may differ depending on the individual, the technology and the context. Trust is required to accept the risk to privacy and personal autonomy accompanying the use of AI (Platt and Kardia Citation2015). Lack of trust, therefore, reduces the integration of AI into daily life (Gillath et al. Citation2021). Differing constructs have been found to precede trust. For instance, Gillath et al. (Citation2021) studied 248 participants and found that, as familiarity with AI increased, so did trust. Similarly, Platt and Kardia (Citation2015) found that trust was associated with knowledge, as well as privacy, perceived benefits, experience and psychosocial factors. However, the effect of trust on AI acceptance differs between contexts and individuals (Kelly, Kaye, and Oviedo-Trespalacios Citation2022).

Many individuals are predisposed to distrust AI. Harrington, Erete, and Piper (Citation2019) researched participatory design methods among underserved populations (e.g. low-income and queer populations). They found that individuals within these communities did not trust how their data would be used (Harrington, Erete, and Piper Citation2019). Harrington, Erete, and Piper (Citation2019) concluded that trusting relationships were needed to facilitate data sharing between researchers and the community. It may be that these participants were especially distrusting because of a historical distrust of institutions, including the technology sector, that have created trauma in underserved communities. As such, minority status (e.g. race) may influence user willingness to participate in data stewardship for AI. Therefore, demographic information reflecting minority status should be included in an extended TAM to assess whether it predicts willingness to provide data for AI.

2.6. Personal characteristics

Personal information, including age and gender identity, may also influence willingness to provide data to AI. The Special Eurobarometer 460 studied individuals (N=27,901) across 28 European countries and found differing attitudes amongst different demographic groups (European Commission Citation2017). Specifically, the study found that respondents who were young, male, well-educated, frequent Internet users, and those with fewer financial stressors exhibited more positive attitudes towards digitalisation and robots (European Commission Citation2017). It might be suggested that the intersection of these identifiers allows the individual to feel a sense of safety as they are less at risk of job loss or discrimination than their older, female, lower socioeconomic status counterparts (Fietta et al. Citation2022; Srinivasan Citation2021; Walsh Citation2018).

Age has also been found to be a significant predictor of intentions to use and trust AI (Sousa and Beltrão Citation2021). Fietta et al. (Citation2022) found that being older and female were significantly and positively correlated with negative implicit and explicit attitudes towards AI. In another study, Chaudhry, Paquibut, and Chabchoub (Citation2022) studied workers in the United Arab Emirates and explored how their trust in AI influenced their intention to adopt AI at work. The findings revealed a significant difference in trust between age groups. Specifically, Generation X and Millennials trusted AI more than Baby Boomers (see Note 1; Chaudhry, Paquibut, and Chabchoub Citation2022). Sousa and Beltrão (Citation2021) also found that Generation X individuals were more trusting and accepting of AI than older generations.

Gender can moderate users’ behavioural intentions to use AI (Andrews, Ward, and Yoon Citation2021; K. Liu and Tao Citation2022) and predicts intention to use AI (Guo et al. Citation2015). Yigitcanlar, Degirmenci, and Inkinen (Citation2022) studied 605 Australian adults’ perceptions of AI via an online survey. Data analysis revealed that gender significantly drove perceived AI risk and trust. Specifically, females perceived greater AI risk than males (Yigitcanlar, Degirmenci, and Inkinen Citation2022). In another study, Selwyn and Gallo Cordoba (Citation2022) found that males were more likely than females to describe themselves as ‘knowing a lot’ about AI. As data repositories require fair and unbiased data, it is essential to explore whether gender is a significant predictor of willingness and whether there is a gender difference between users willing to provide their data to AI.

Despite these findings, contradictory reports have also arisen. Yang et al. (Citation2019) and Xiang et al. (Citation2020) found that individuals who identified as males and minorities are likely to choose AI for medical services rather than human practitioners, indicating that demographic information, especially the intersectionality of multiple demographics, influences intentions to use AI technology. However, the influence of demographics on acceptance may differ depending on the service industry (Kelly, Kaye, and Oviedo-Trespalacios Citation2022). Furthermore, we acknowledge that the experiences of minorities differ. The research cited above elucidates the need to include demographic information in the extended TAM to measure willingness to provide data, given the evidence that factors such as age and gender influence behavioural intentions to use AI.

2.7. Data donation profile

Public participation in AI via PDS has been recommended in recent reports and proposals (Patel et al. Citation2021; Whittlestone et al. Citation2019). However, no existing model measures what factors might predict user willingness to provide their data for AI. In the related context of data donation, Skatova and Goulding (Citation2019) developed a Data Donation scale to test people’s willingness to donate their data; it contained 18 items that assessed duty, purpose and self-image on a five-point Likert scale from ‘strongly disagree’ to ‘strongly agree’. These factors were based on research indicating that some individuals feel it is their social responsibility to donate (Mujcic and Leibbrandt Citation2018), that control over data is essential (Bonney et al. Citation2009) and that self-motivating feelings (e.g. the positive sense of self that follows donating) drive donation (Andreoni Citation1990; Evans and Ferguson Citation2014; Ferguson Citation2015; Ferguson and Lawrence Citation2016). Preliminary testing of this scale revealed that it explained 62% of the variance in willingness to donate, with good fit statistics (Skatova and Goulding Citation2019).

Skatova and Goulding (Citation2019) studied 1,300 participants’ intentions and reasons for donating their supermarket loyalty card data to either a cancer research centre, a university medical centre, or a generic charity. The results indicated that over half (55.7%) of the participants elected to donate their data, with the social duty to benefit others as the strongest predictor of donation, suggesting that people have an innate desire to help others (Skatova and Goulding Citation2019). This research supports other studies that found that participants were likely to ‘buy in’ to case studies where sharing their data benefitted society (Centre for Data Ethics and Innovation Citation2021; Gomez Ortega, Bourgeois, and Kortuem Citation2022). Additionally, self-image, duty and understanding of the purpose of the data significantly predicted willingness to donate data, above and beyond personal characteristics, the Prosocial Tendencies Measure and the Self-Report Altruism scale (Skatova and Goulding Citation2019).

Further research is required to assess whether findings from the extant literature on data donation can be transferred to other countries and contexts. For instance, Skatova and Goulding’s (Citation2019) research is limited to a specific context (i.e., health behaviour in the United Kingdom) for donation and does not specify the use of data for AI. As such, research is required to test whether these scales are also predictive of willingness to provide data to AI in the context of PDS.

3. Current study

3.1. Objectives

This study offers a broad view of user willingness to participate in data stewardship for AI. Three research questions were proposed to explore user willingness to participate in data stewardship by providing their data to AI. The research questions combined the existing theoretical frameworks of an extended TAM (eTAM) and data donation research to examine which factors predicted user willingness to participate in AI data stewardship (Figure 1).

Figure 1. Conceptual research model.

This study aimed to explore user willingness to participate in data stewardship for AI in a multi-industry analysis, as the authors’ previous work suggested that a broader enquiry was needed to assess willingness to participate in data stewardship across various industries (Kelly, Kaye, and Oviedo-Trespalacios Citation2022). As such, participants’ willingness to engage in PDS was explored after exposure to one of three written scenarios or a control condition. The scenarios were AI for healthcare, organisational use and educational purposes (see Section 4.3). These industries were selected due to the heightened interest in, and use of, AI within them in the current literature (Leslie et al. Citation2021; Na et al. Citation2022; Nazaretsky et al. Citation2022). Three research questions were formulated to structure the investigation:

Research Question 1: Would individuals be willing to participate in data stewardship for AI?

Research Question 2: What factors predict user willingness to participate in data stewardship for AI?

Research Question 3: What is the reasoning underlying people’s decisions to provide data to AI?

4. Methods (see Note 2)

4.1. Participants and recruitment

We recruited 322 participants aged 18 and older (M age = 28.38 years, SD=16.05, range 18-88 years) from the Australian population. The sample included 213 females (66.1%), 94 males (29.2%), 7 gender non-binary individuals (2.2%), 3 queer individuals (1%), 3 participants preferring not to say (1%), 1 who identified as ‘other’ (0.3%) and 1 transgender individual (0.3%). The participants self-identified their race as white Australian (70.8%), Asian (10.1%), multiple races (6.9%), ‘other’ (4.5%), Asian Indian (2.8%), Hispanic (1.7%), Australian Aboriginal (1%) and African American (1%). The remaining participants preferred not to say (2) or were unsure (1). Recruitment was conducted online and via word-of-mouth. Participants comprised 171 first-year psychology students at Queensland University of Technology recruited through an online university student research management system (SONA) and received 0.5-course credit for survey completion. The remainder (n=151) of the participants were recruited from the general population and were allowed to enter a prize draw with the chance to win one of six $50 (AUD) gift vouchers.

4.2. Measures

4.2.1. Extended Technology Acceptance Model

A seven-point Likert Scale (1 = Strongly disagree, 7 = Strongly agree) was used to assess the extended TAM variables. Two items measured PU, ‘I think AI would be very useful’ and, ‘I think AI would improve/enhance my ability to live’ (Cheng, Lam, and Yeung Citation2006; Davis Citation1985). Two items represented PEOU, ‘I think AI would make my daily life easier’ and, ‘I think interacting with an AI device would be difficult for me’ (Cheng, Lam, and Yeung Citation2006; Davis Citation1985). As per Kelly, Kaye, and Oviedo-Trespalacios (Citation2023), additional variables were assessed alongside the two TAM variables to strengthen the model’s predictability: trust and subjective norms. Two items measured trust, ‘I trust AI to make predictions, recommendations, or decisions influencing real or virtual environments’ and ‘AI is a trustworthy channel for me to share my personal details’ (Choung, David, and Ross Citation2022; Dinev and Hart Citation2006). Three items were employed to assess subjective norm; ‘If people close to me used AI, I would too’, ‘Most people who influence my behaviour would think that I should use AI for daily life’, and ‘Most people whose opinions I value would approve of me using AI for daily life’ (Ajzen Citation1991). For all measures, higher scores represent a higher endorsement of the items. Appendix A lists the measurement items used in the survey and their associated references.
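As an illustration of how such two- and three-item measures can be combined into composite scores, the following Python sketch averages the Likert responses; the column names are hypothetical, and reverse-scoring the negatively worded PEOU item (‘…would be difficult for me’) before averaging is an assumption for the sketch rather than a documented step of this study.

```python
import pandas as pd

# Illustrative responses on the 1-7 Likert scale; column names are hypothetical.
df = pd.DataFrame({
    "pu_1": [6, 5, 7], "pu_2": [5, 4, 6],                    # perceived usefulness
    "peou_easier": [6, 3, 5], "peou_difficult": [2, 6, 3],   # perceived ease of use
    "trust_1": [4, 2, 6], "trust_2": [3, 2, 5],              # trust
    "sn_1": [5, 3, 6], "sn_2": [4, 3, 5], "sn_3": [5, 4, 6], # subjective norms
})

# Assumption: the negatively worded PEOU item is reverse-scored (8 - x on a 1-7 scale).
df["peou_difficult_r"] = 8 - df["peou_difficult"]

# Composite scores as the mean of each construct's items; higher = stronger endorsement.
composites = pd.DataFrame({
    "PU": df[["pu_1", "pu_2"]].mean(axis=1),
    "PEOU": df[["peou_easier", "peou_difficult_r"]].mean(axis=1),
    "Trust": df[["trust_1", "trust_2"]].mean(axis=1),
    "SubjectiveNorms": df[["sn_1", "sn_2", "sn_3"]].mean(axis=1),
})
print(composites)
```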

4.2.2. Data Donation scale

The participants’ data donation profile (DDP) was measured via the Data Donation scale, adapted from Skatova and Goulding (Citation2019). This scale contains 18 items, shows a high convergent validity and is a reliable measure of willingness to donate data (Carlo et al. Citation2005; Skatova and Goulding Citation2019). All items were measured on a 7-point Likert Scale (1 = Strongly disagree, 7 = Strongly agree). Five items were used to measure social duty to help others; for example, ‘I would donate my data to AI research because I believe that I have a responsibility to help others’. Seven items measured the participants’ understanding of the purpose of how the data would be used. For instance, ‘Before donating my data to AI research, I would seek to understand the purpose of giving data for research’. Six items measured guilt. For example, ‘If I did not donate my data to AI research, I would feel less guilty if others did the same’. Higher scores represented higher levels of social duty to help others, the need to understand the purpose of how the data would be used and guilt. Appendix B lists the measurement items used in the survey and the associated reference.

4.2.3. Willingness

Willingness to participate in data stewardship for AI was measured on a binary scale (yes or no). As such, the item measured willingness and objection to participate in data stewardship for AI. This factor follows the outcome variable presented in Gursoy et al.’s (Citation2019) AI Device Use Acceptance model. Participants were also prompted to provide a written response to explain their choice.

4.3. Design

This study was a one-way between-subjects experimental design. Participants were presented with the following information,

Technology companies, such as Google and Amazon, collect their own data based on images, videos, text and speech provided to them by users. This data is then utilised to train AI agents by finding patterns and themes within these datasets. For instance, if the machine is asked to recognise a figure three, it will be coded on correct answers (i.e., figure threes) and wrong responses (i.e., other digits). As such, the more figure threes this technology has to learn from, the smarter it grows. However, if the machine is only trained on black images of figure threes, it may not recognise a red figure three and will code it incorrectly.

The participants were then randomly allocated into a condition (health care, organisational use, educational use, or a control condition) and asked to read a corresponding paragraph (Appendix C). This was to provide participants with concrete examples of why PDS is needed and to study if there were any between-group differences. The participants were then asked, ‘Are you willing to give your data to help train AI?’ Participants could choose between yes or no. They were prompted to explain their reasoning in both instances.

4.4. Procedure

The study was approved by the Queensland University of Technology (QUT) Ethics Committee (approval number: 5695). Participants were recruited via QUT classified email list and paid social media, including Facebook, Instagram and SONA. After obtaining participants’ informed consent and ensuring they met the entry criteria (e.g. 18 or older), participants were directed to complete an online Qualtrics survey. First, the survey elicited the participants’ demographic information (e.g. age and gender). The participants were then provided with a written definition of AI, ‘an unnatural object or entity that possesses the ability and capacity to meet, or exceed the requirements of the task it is assigned when considering cultural and demographic circumstances’ (Kelly, Kaye, and Oviedo-Trespalacios Citation2023) and examples of AI (e.g. Siri, chatbots, predictive text) to ensure a standard level of knowledge amongst all participants.

Participants then answered Skatova and Goulding’s (Citation2019) Data Donation Scale. Following this, participants were told that ‘AI research’ was defined as ‘data collection that contributes to the training and development of AI technology’. The participants then completed the extended TAM.

Participants were then randomly allocated into one of four conditions and asked to read an extract on PDS. Each group was presented with a different extract of PDS that included an example of AI in an organisational product (recruitment system), health service (general health practitioner), education and a general scenario (which acted as the control scenario; see Appendix C). Participants were asked if they were willing to give their data to help train AI. They were then prompted to explain (in their own words) why. The online survey was conducted from April 2022 to March 2023.

5. Results

5.1. Quantitative analyses

All data were assessed at a significance value of p<0.05. Descriptive data are presented first, followed by frequency statistics to answer RQ1. A logistic regression is then presented to answer RQ2. The Statistical Package for the Social Sciences (SPSS) Version 28 was used to conduct all analyses for this study. Little’s Missing Completely at Random (MCAR) test revealed that less than 5% of data were missing and that data were missing completely at random, χ2(3, N=322) = 5.42, p=0.143.
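Little’s MCAR test was run in SPSS. For readers working outside SPSS, a minimal Python sketch of the kind of missing-data audit described above (per-variable and overall percentages, not the MCAR test itself) might look as follows; the data and variable names are simulated.

```python
import numpy as np
import pandas as pd

# Simulated survey responses with a handful of missing values (hypothetical variables).
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.integers(1, 8, size=(322, 4)).astype(float),
                    columns=["pu", "trust", "duty", "purpose"])
data.iloc[rng.choice(322, 10, replace=False), 1] = np.nan  # inject some missingness

# Per-variable and overall percentage missing; the overall figure should stay below 5%.
missing_pct = data.isna().mean() * 100
print(missing_pct.round(2))
print(f"Overall missing: {data.isna().mean().mean() * 100:.2f}%")
```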

5.1.1. Assumptions

Visual assessment of the residual histograms indicated that data were normally distributed. The residual and pairwise scatterplots confirmed linearity. Skewness and kurtosis values were between the recommended +/− 2 (Bowerman and O’Connell Citation1990). Collinearity tests indicated that multicollinearity was not a concern (i.e., VIF < 10, tolerance > 0.1; Bowerman and O’Connell Citation1990). The observations were independent.
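A minimal Python sketch of these assumption checks, using simulated predictors with hypothetical names; the thresholds (skewness and kurtosis within +/− 2, VIF below 10, tolerance above 0.1) follow the criteria cited above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(4, 1, size=(322, 4)),
                 columns=["pu", "trust", "duty", "purpose"])  # hypothetical predictors

# Skewness and excess kurtosis should fall within roughly +/- 2.
print(X.apply(stats.skew).round(2))
print(X.apply(stats.kurtosis).round(2))

# Variance inflation factors: VIF < 10 (tolerance = 1/VIF > 0.1) suggests
# multicollinearity is not a concern.
Xc = sm.add_constant(X)
vif = pd.Series(
    [variance_inflation_factor(Xc.values, i) for i in range(1, Xc.shape[1])],
    index=X.columns,
)
print(vif.round(2))
print((1 / vif).round(2))  # tolerance
```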

5.1.2. Descriptive data

Descriptive statistics of the independent variables are presented in Table 1. Reliability was moderate to strong for all scales except PEOU (r=0.11, p=0.047), which signifies a weak correlation. As such, one item (‘I think AI would make my daily life easier’) was chosen to be assessed independently to allow for the inclusion of PEOU in the model (see Note 3). This method has been used in similar scenarios, such as Kaye et al. (Citation2020), which measured PEOU via a single item due to low-reliability scores. Table 1 shows that the remaining scales were reliable and that the data were not substantially skewed.
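Because PU, PEOU and trust were measured with only two items each, their reliability can be summarised by the inter-item correlation (and, if desired, the Spearman-Brown step-up coefficient). A short Python sketch with illustrative values only:

```python
from scipy import stats

# Illustrative item scores for a two-item scale (not the study's data).
item_a = [6, 3, 5, 7, 4, 2, 6, 5]
item_b = [5, 4, 2, 6, 3, 4, 5, 3]

# Inter-item correlation and the Spearman-Brown coefficient for a two-item composite.
r, p = stats.pearsonr(item_a, item_b)
spearman_brown = 2 * r / (1 + r)
print(f"r = {r:.2f}, p = {p:.3f}, Spearman-Brown = {spearman_brown:.2f}")
# A weak correlation, as observed for PEOU here, would support carrying a single
# item forward rather than a two-item composite.
```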

Table 1. Descriptive and reliability statistics of scales.

5.1.3. Bivariate relationships

The bivariate correlations between the independent and dependent variables can be found in Appendix D. Categorical demographic information (gender, race and sexual orientation) was converted to binary items (e.g. female and other, white Australian and other, straight and other). While we recognise that more than two genders, races and sexualities exist, this was required to conduct the analyses. Age, race and sexual orientation were not significantly related to willingness. Gender, the extended TAM variables (i.e., PU, PEOU-ease, trust and subjective norms) and the data donation variables (i.e., social duty, understanding of purpose and guilt) were significantly and positively related to willingness.
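A hedged Python sketch of these bivariate checks: dummy-coding a categorical demographic variable and correlating a continuous predictor with the binary willingness outcome. The data and variable names are simulated for illustration only.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "gender": rng.choice(["female", "male", "non-binary"], 322),
    "trust": rng.normal(4, 1, 322),
    "willing": rng.integers(0, 2, 322),   # 1 = willing, 0 = not willing
})

# Collapse gender to a binary indicator (female vs. other), as described above.
df["gender_female"] = (df["gender"] == "female").astype(int)

# Point-biserial correlation between a continuous predictor and binary willingness.
r, p = stats.pointbiserialr(df["willing"], df["trust"])
print(f"trust vs willingness: r = {r:.2f}, p = {p:.3f}")

# Phi coefficient between two binary variables (equivalent to Pearson r here).
phi = df[["gender_female", "willing"]].corr().iloc[0, 1]
print(f"gender (female) vs willingness: phi = {round(phi, 2)}")
```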

5.1.4. Preliminary data checks

Chi-square analyses were conducted to explore if there were any significant differences between the four conditions in participants’ gender, race, or sexual orientation. The findings revealed no significant difference between the conditions in gender, χ2(3, N=320) = 0.13, p=0.989, race, χ2(3, N=320) = 2.78, p=0.426, or sexual orientation, χ2(3, N=319) = 1.52, p=0.677 (see Note 4). A one-way ANOVA demonstrated no significant difference in the age of participants between groups, F(3, 319) = 0.15, p=0.931.

A chi-square test revealed no significant difference in willingness to provide data to AI between the three scenarios and control condition, χ2(3, N=278) = 0.66, p=0.883. Given that there were no significant differences in the willingness ratings between participants allocated to the health, organisational, education and general control condition, the subsequent logistic regression was performed using the total sample instead of performing a separate analysis for each condition.
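For completeness, a minimal Python sketch of these randomisation checks (chi-square tests for categorical variables across the four conditions and a one-way ANOVA for age); the data are simulated and the column names hypothetical.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "condition": rng.choice(["health", "organisation", "education", "control"], 320),
    "gender_female": rng.integers(0, 2, 320),
    "willing": rng.integers(0, 2, 320),
    "age": rng.normal(28, 16, 320).clip(18, 88),
})

# Chi-square test of independence: gender (binary) by condition.
table = pd.crosstab(df["condition"], df["gender_female"])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"gender by condition: chi2({dof}) = {chi2:.2f}, p = {p:.3f}")

# Chi-square test: willingness by condition.
chi2, p, dof, _ = stats.chi2_contingency(pd.crosstab(df["condition"], df["willing"]))
print(f"willingness by condition: chi2({dof}) = {chi2:.2f}, p = {p:.3f}")

# One-way ANOVA: age across the four conditions.
groups = [g["age"].values for _, g in df.groupby("condition")]
f_stat, p = stats.f_oneway(*groups)
print(f"age by condition: F = {f_stat:.2f}, p = {p:.3f}")
```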

5.1.5. Logistic regression

A binary logistic regression was conducted to measure the predictive power of demographic details (age, gender, race and sexual orientation), the eTAM (PU, PEOU – ease, subjective norm and trust) and DDP (social duty to help, understanding the purpose, guilt and self-image) on willingness to provide data to AI (Table 2). An a priori power analysis conducted using G*Power (Faul et al. Citation2009) indicated that 213 participants were required to achieve a statistical power of 0.80 at α = 0.05 (Beck Citation2013; Cohen Citation2013), providing evidence for the robustness of the obtained sample size (N=277). The model was a significantly better predictor of willingness than the null model with no predictors added, χ2(11, N=277) = 146.77, p<0.001, and explained 55.4% of the variance (Nagelkerke R2 = 0.55). Hosmer and Lemeshow’s test confirmed that the model did not predict outcomes significantly different to the observed, χ2(8, N=277) = 5.28, p=0.727. The constructs significantly predicted willingness (p=0.003), with a coefficient of 0.36 (SE=0.12, Wald χ2=8.60, df=1).
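The analysis was run in SPSS; as a hedged illustration of the same model form, the following Python sketch fits a binary logistic regression and computes Nagelkerke’s R2 from the fitted and null log-likelihoods. The predictors, coefficients and simulated outcome are placeholders, not the study’s data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 277
X = pd.DataFrame({
    "trust": rng.normal(4, 1, n),
    "social_duty": rng.normal(4.5, 1, n),
    "purpose": rng.normal(5, 1, n),
    "gender_female": rng.integers(0, 2, n),
})

# Simulate a binary willingness outcome from an assumed logistic relationship.
logit_true = -6 + 0.8 * X["trust"] + 0.6 * X["social_duty"] + 0.4 * X["purpose"]
p_true = 1 / (1 + np.exp(-logit_true))
y = rng.binomial(1, p_true.to_numpy())

# Fit the logistic regression (constant added explicitly).
model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.summary2())

# Nagelkerke R2 = Cox & Snell R2 rescaled to a 0-1 range.
cox_snell = 1 - np.exp((2 / n) * (model.llnull - model.llf))
nagelkerke = cox_snell / (1 - np.exp((2 / n) * model.llnull))
print(f"Nagelkerke R2 = {nagelkerke:.2f}")
```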

Table 2. Logistic regression.

Categorical demographic information (gender, race and sexual orientation) were converted to binary items (e.g. female and other, white Australian and other, straight and other).

5.2. Qualitative analysis

The first author undertook a content analysis to review the responses to the open-ended question, ‘Are you willing to give your data to help train AI? Why?’. The majority (n=278, 86.3%) of participants provided written responses to this question. One hundred sixty-three (58.6%) of those participants responded that yes, they were willing to give data to help train AI, and 115 participants (41.4%) responded that no, they were not willing to give data to help train AI. The open-ended responses were compiled into a Microsoft Excel document and classified into themes by reviewing the frequency of the content reported by participants. The first author identified the themes by reviewing the frequency of participant responses and discussed them with all other authors. Table 3 displays the themes identified from the responses. The qualitative responses were consistent across the scenarios. To protect participants’ anonymity, all quotes are cited in terms of the gender and age of participants (e.g. F, 18 is an 18-year-old female).
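The theme coding itself was done manually in Excel. As a simple illustration of how theme frequencies can be tallied once responses have been coded, a short Python sketch is shown below; the theme labels and counts are hypothetical stand-ins for the manual coding.

```python
import pandas as pd

# Hypothetical coded responses: willingness (yes/no) and the theme assigned by the coder.
coded = pd.DataFrame({
    "willing": ["yes", "yes", "no", "no", "yes"],
    "theme": ["benefit society", "advance technology", "distrust",
              "privacy concerns", "benefit society"],
})

# Tally how often each theme appears within the willing and unwilling groups.
theme_counts = coded.groupby("willing")["theme"].value_counts()
print(theme_counts)
```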

Table 3. Primary qualitative themes.

6. Discussion

This study extended the TAM to create a model of the factors contributing to individuals’ willingness to provide their data to AI. Over half of the participants indicated they were willing to provide their data to AI by participating in data stewardship. The model further explores this finding, demonstrating that trust, benefitting society and understanding of the purpose predict user willingness to provide their data to AI. Finally, the qualitative data further provide insight into the reasoning underlying people’s decisions to provide their data.

6.1. Individuals’ willingness to participate in data stewardship

Data analysis revealed that more than half of the participants (58.6%) were willing to provide their data to AI. This is a positive finding for AI developers and researchers looking to engage individuals to provide their data. Furthermore, this finding suggests that people are willing to provide their data, which can reduce the existing biases due to the underrepresentation and misrepresentation of minorities. Stakeholders such as researchers and developers should use the following information to make informed decisions in how they recruit individuals for PDS and develop their products.

6.2. Factors that predict user willingness

6.2.1. Trust

As willingness is a pivotal factor underlying decision-making, analysis of the model allows us to understand the elements that drive an individual’s decision to participate in data stewardship by providing their data to AI (Thornton, Gibbons, and Gerrard Citation2002). Fitting with prior research (e.g. Choi Citation2020; Lockey, Gillespie, and Curtis Citation2020), trust was a significant positive predictor of willingness to provide data to AI. This finding indicates that, as trust in AI increases, so does willingness to provide data to AI. Similarly, Stracke (Citation2020) writes that the existence of open science facilitates reliability and trust. As such, participatory methods and trust may have a complementary relationship. AI companies must build trust with their consumers to facilitate increased willingness to collaborate via data submission. Continuing relationships, communication, consistent messaging and action, and regulation are required to ensure public trust (Peppin Citation2022). This finding aligns with research demonstrating a significant positive relationship between trust and AI acceptance (Choi Citation2020; Kelly, Kaye, and Oviedo-Trespalacios Citation2022, Citation2023). Sloane et al. (Citation2020) state that ongoing relationships based on mutual benefit are required to promote and maintain trust between all design and use process members. As such, companies, researchers and governments should build trustworthy data ecosystems to protect against public harm and resistance.

On the other hand, the significance of trust in the model also indicates that distrust in AI can reduce the willingness to participate in data stewardship for AI. As Kaplan et al. (Citation2021) write, distrust in AI refers to the fear of the adverse outcome of the system failing to perform its expected task. For instance, one may distrust the output of a chatbot, potentially reducing use behaviour. Content analysis of qualitative responses showed that 23 participants (20% of those unwilling to provide data) objected to providing their data to AI due to distrust (see Table 3). This finding fits with the quantitative data, which indicated that trust was a significant positive predictor of willingness. Therefore, it stands to reason that people are less willing to provide their data to AI if trust is reduced.

The finding that willingness did not differ between conditions is noteworthy: participants allocated to the healthcare condition who stated they did not trust the depository of the data contradict prior research showing that health institutions are the most trusted public institutions (Centre for Data Ethics and Innovation Citation2022). This may be due to the perceived shift from trusting a one-on-one practitioner to distrusting a broader scope of intermediaries, such as large biobanks and big tech (Platt and Kardia Citation2015). In light of this finding, governmental agencies and private companies must ensure data protection for consumers to safeguard their data, facilitating trust (Richter et al. Citation2021).

6.2.2. Social duty

Social duty to help others positively and significantly predicted individuals’ willingness to provide their data to AI. Therefore, individuals are more likely to participate in data stewardship if they feel their contribution benefits society. This result fits Skatova and Goulding’s (Citation2019) finding that social duty to benefit others was the strongest predictor of data donation. Interestingly, PU was not a significant predictor in our model, while social duty was a significant predictor of willingness to provide data to AI. It may be that, in the context of PDS, consumers care more about how their data serve others than how they serve their own interests. This finding contests prior research stating that people engage in prosocial behaviours due to the desire to improve their social image (Luo and Gao Citation2022; Septianto et al. Citation2020). The difference may lie in donating data compared with providing data for PDS, which offers benefits such as control (i.e., choosing the beneficiary) and potential compensation. The finding of trust and social duty as significant themes parallels the Deloitte and Reform (Citation2018) finding that trust in governmental use of data is driven by the belief that data is used for the benefit of society. As such, it is suggested that companies and organisations wishing to engage consumers in data stewardship should promote how PDS provides societal benefit to encourage this perspective alongside trust.

The qualitative evidence further supports that individuals are willing to participate in data stewardship for AI when they feel it is their social duty to benefit others. Benefitting society was the most frequent theme among participants willing to participate in data stewardship for AI (52 respondents; see Table 3). While the idea of participatory design in health systems is not new (Bietz, Patrick, and Bloss Citation2019; Donia and Shaw Citation2021), it is interesting to note that similar themes from health donation research (e.g. blood donation) transfer to AI research (White, Poulsen, and Hyde Citation2017). Collectively, this research endorses the importance of social benefit to those willing to participate in data stewardship by providing their data to AI.

The qualitative theme of benefitting society frequently overlapped with a similar theme of willingness to provide data to advance technology development across all scenarios. Here, participants demonstrated their logic that more data would create more equitable and robust AI systems, ultimately benefiting society. This theme was apparent across all conditions. The prevalence of individuals willing to provide their data to AI to benefit society and advance technology development points to a desire to serve others. As such, stakeholders such as researchers and developers should aim to prioritise societal interests over individual or organisational gains.

6.2.3. Understanding the purpose

The need to understand the purpose of how data would be used significantly and positively predicted willingness to engage in PDS. As such, participants indicated they would decide to provide data to AI depending on the information provided about the AI and how it would be used. This finding suggests individuals require increased understanding of PDS and are interested in the outcome and re-purposing of their data after providing it to intermediaries. This result links to recent research in the field, which calls for increased involvement from key stakeholders, such as consumers, in the interpretation and end-use of their data (Araujo et al. Citation2022; Harrington, Erete, and Piper Citation2019; Sloane Citation2019; Sloane et al. Citation2020). We recommend that companies consider offering educational opportunities and ongoing communication through outreach programs to promote data literacy tailored to the project (Ridsdale et al. Citation2015). This will enhance their comprehension of the project’s data-related objectives.

The significant positive relationship between understanding the purpose of how the data would be used and willingness to provide data to AI can also be identified in the qualitative data as participants expressed both that they were (i) willing to provide their data in the correct context, and that they (ii) objected to PDS due to a lack of knowledge of how it would be used and concerns of privacy breaches. Therefore, as an understanding of the purpose of the data decreases, so does the willingness to provide data for PDS (and vice versa). While prior studies have assessed how pre-existing knowledge of AI impacts trust (Chaudhry, Paquibut, and Chabchoub Citation2022) and acceptance (Seo and Lee Citation2021) of AI, this theme appears concerned with knowledge of how the data are used to inform AI after it is submitted.

While it could be anticipated that the need to understand the purpose of the data would be more apparent in the control condition, as participants were given less information than in the healthcare, organisational and education scenarios, it was evident in all scenarios. Overall, individuals may need to be informed of the use of their data or be allowed to control where it goes and how it is interpreted to increase knowledge and, consequently, willingness. Subsequently, it is recommended that more transparency is provided around how data is used, whom it is used by, the access other companies have to it, any risks or benefits and the control users retain over it, thereby ensuring that users understand the implications of sharing their data.

6.3. The influence of other predictors

6.3.1. Personal characteristics

It is necessary to discuss the non-significant predictors to create a comprehensive overview of the context of the model. Age, race and sexual orientation were not significantly correlated with willingness, and no personal characteristics significantly predicted willingness in the model. While prior research points to demographic information influencing user attitudes towards AI (Sousa and Beltrão Citation2021; Yang et al. Citation2019), the current study contests this, as these factors did not emerge as influential in the model. It may be that personal characteristics become non-significant when included alongside other significant predictors, such as trust and social duty, which may account for these factors. However, as much of the cited research came from studies assessing AI attitudes (European Commission Citation2017) and intentions to adopt AI (Chaudhry, Paquibut, and Chabchoub Citation2022), it may be that these topics diverge from AI PDS research regarding the importance of personal characteristics in predicting willingness. Further research is, therefore, required to substantiate this finding in the context of willingness to provide data for AI PDS.

6.3.2. TAM

Contrary to the TAM, neither PU nor PEOU was a significant predictor of willingness to provide data for AI via data stewardship. This result contradicts previous research demonstrating the significance of PU and PEOU in predicting behavioural intentions to use AI (Gado et al. Citation2021; Kelly, Kaye, and Oviedo-Trespalacios Citation2022). However, unlike these studies, the current research explored user willingness to provide data towards AI via PDS. As such, it may be that PU and PEOU do not predict willingness in this instance. As the dependent variable is more concerned with providing data for AI than using AI, it may be that these variables are less meaningful than when they are used in models that predict the use behaviour of existing AI technology, such as chatbots. More research is needed to further test the applicability of technology acceptance models, such as the TAM, to PDS research for AI.

6.3.3. Guilt

Guilt was the only factor from Skatova and Goulding’s (Citation2019) scale that was not a significant predictor of willingness. It could be that, while individuals feel guilt when objecting to donating health data, they can differentiate this emotion from their willingness (or objection) to provide data towards AI research. As AI research encompasses a broad range of activities and purposes, the move away from clearly altruistic outcomes (e.g. helping cancer research) could explain this difference. Further research is required to understand the motivations underlying user reasoning to give data in different contexts.

6.4. User reasoning to provide data to AI

In addition to the themes that align with the significant quantitative predictors, the content analysis revealed additional themes not captured by the model. For individuals willing to provide their data to AI, 10 stated that this was conditional on being financially compensated. This diverges from the finding that individuals are willing to provide their data due to an altruistic desire to benefit society (Skatova and Goulding Citation2019). As such, organisations, researchers and governments wishing to elicit user data for AI may have to pay in some circumstances. Nine other participants stated they were interested in the data’s outcome and would provide it out of curiosity. Others said they were indifferent to the topic as they felt it was inevitable. Thus, some people may feel apathetic and be willing to provide their data as they believe it is the path of least resistance. Organisations and researchers seeking to elicit user data for AI can use this information to guide the circumstances in which individuals are willing to participate in data stewardship for AI.

Alternatively, participants who responded that they were not willing to participate in data stewardship listed additional themes such as fear and ethical discomfort with AI. For instance, three participants stated they were against big-technology firms and did not want to support their development. However, under the right circumstances (e.g. not-for-profits), some said they would be willing to provide their data. This is hopeful for organisations and governments looking to elicit user data for philanthropic uses. In light of this research, stakeholders must ensure that users understand the context and purpose of the data to assure them against threats such as providing their data to depositories they do not wish to support.

6.5. Limitations and future recommendations

Limitations must be considered when interpreting the findings of this study. As this paper addresses a novel topic, no models exist to measure willingness to provide data to participate in data stewardship for AI. Therefore, this study used a scale to assess an individual’s DDP (Skatova and Goulding Citation2019). However, in this study, we assessed willingness to provide data. This differs from donating in that there may be an expectation of being offered something in return (e.g. compensation). Furthermore, PDS includes the ability to control the beneficiary of the data and remove the data if desired. Despite this, two of the three predictors in this model were significant in the logistic regression, demonstrating the transference between donation and PDS research. On the other hand, the TAM was not a significant predictor in the logistic regression, signifying that it may not be a complete model for assessing user willingness in the context of PDS. This may be due to using a model that studies behavioural intentions to use technology, in contrast to our dependent variable of willingness to provide data. It is recommended that future research adapt theoretical models from donation literature rather than technology acceptance models, which may not be transferable to study user willingness in this context.

As this study relied on convenience sampling, over half of the sample were first-year psychology students. While some are critical of the use of students as study subjects, previous studies have found that younger adults (e.g. university students) express similar attitudes to older adults (Hoofnagle et al. Citation2010) and that there is no significant difference between students and non-student samples when researching technology use behaviour (Nadkarni and Gupta Citation2007). However, it must be noted that this study and the prior research cited were conducted in Westernised countries with educated subjects and may not apply to all cultures. To explore a more diverse range of participants, it is suggested that future research employ alternative sampling techniques and data collection methods.

7. Conclusion

The present study considered the influence of personal characteristics, an extended TAM (with trust and subjective norms) and data donation profile on user willingness to provide data to AI via PDS. To the best of our knowledge, this is the first paper to explore what psychosocial factors predict participatory behaviour in the context of AI. This study confirms that users must trust and comprehend the broader societal impact of AI when providing their data. As such, AI developers should value and promote the wider societal influence of their technology to facilitate an understanding of the benefits of providing data to AI. Furthermore, trust should be fostered between users and AI via the validity and reliability of the technology and the organisation/s. The research also demonstrates that traditional technology acceptance models, such as the TAM, may not provide a comprehensive overview of human behaviour in the context of willingness to provide data. These results contribute to the theoretical literature and can guide organisations, researchers and governments looking to strengthen their AI models via the responsible collection and utilisation of user data.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Notes

1 Generation X are individuals born from 1965 to 1980; Millennials are born from 1981 to 1996; Baby Boomers are born from 1946 to 1964.

2 This paper is a part of a larger research program.

3 This item was selected as the other PEOU item produced spurious results.

4 Please note that gender, race, and sexual orientation were measured as binary items.

References

  • Ajzen, I. 1991. “The Theory of Planned Behavior.” Organizational Behavior and Human Decision Processes 50 (2): 179–211. doi:10.1016/0749-5978(91)90020-T.
  • Ajzen, I., and M. Fishbein. 1975. “A Bayesian Analysis of Attribution Processes.” Psychological Bulletin 82 (2): 261–277. doi:10.1037/h0076477.
  • Andreoni, J. 1990. “Impure Altruism and Donations to Public Goods: A Theory of Warm-Glow Giving.” The Economic Journal 100 (401): 464–477. doi:10.2307/2234133.
  • Andrews, J.E., H. Ward, and J. Yoon. 2021. “UTAUT as a Model for Understanding Intention to Adopt AI and Related Technologies among Librarians.” The Journal of Academic Librarianship 47 (6): 102437. doi:10.1016/j.acalib.2021.102437.
  • Araujo, T., J. Ausloos, W. van Atteveldt, F. Loecherbach, J. Moeller, J. Ohme, D. Trilling, B. van de Velde, C. de Vreese, and K. Welbers. 2022. “Osd2f: An Open-Source Data Donation Framework.” Computational Communication Research 4 (2): 372–387. doi:10.31235/osf.io/xjk6t.
  • Beck, T.W. 2013. “The Importance of a Priori Sample Size Estimation in Strength and Conditioning Research.” Journal of Strength and Conditioning Research 27 (8): 2323–2337. doi:10.1519/JSC.0b013e318278eea0.
  • Belanche, D., L.V. Casalo, and C. Flavian. 2019. “Artificial Intelligence in FinTech: understanding Robo-Advisors Adoption among Customers.” Industrial Management & Data Systems 119 (7): 1411–1430. doi:10.1108/IMDS-08-2018-0368.
  • Bietz, M., K. Patrick, and C. Bloss. 2019. “Data Donation as a Model for Citizen Science Health Research.” Citizen Science: Theory and Practice 4 (1): 1–11. doi:10.5334/cstp.178.
  • Birhane, A., W. Isaac, V. Prabhakaran, M. Díaz, M.C. Elish, I. Gabriel, and S. Mohamed. 2022. “Power to the People? Opportunities and Challenges for Participatory AI.” Equity and Access in Algorithms, Mechanisms, and Optimization 1–8. doi:10.1145/3551624.3555290.
  • Bonney, R., H. Ballard, R. Jordan, E. McCallie, T. Phillips, J. Shirk, and C. Wilderman. 2009. Public Participation in Scientific Research: Defining the Field and Assessing Its Potential for Informal Science Education. A CAISE Inquiry Group Report.
  • Bowerman, B.L., and R.T. O’Connell. 1990. Linear Statistical Models: An Applied Approach. USA: Brooks/Cole.
  • Buolamwini, J., and T. Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Paper presented at the Conference on Fairness, Accountability and Transparency, 2018. Accessed April 2020. https://proceedings.mlr.press/v81/buolamwini18a.html
  • Carlo, G., M.A. Okun, G.P. Knight, and M.R.T. de Guzman. 2005. “The Interplay of Traits and Motives on Volunteering: Agreeableness, Extraversion and Prosocial Value Motivation.” Personality and Individual Differences 38 (6): 1293–1305. doi:10.1016/j.paid.2004.08.012.
  • Centre for Data Ethics and Innovation. 2021. “BritainThinks: Trust in Data” Accessed December, 2022. https://www.gov.uk/government/publications/britainthinks-trust-in-data
  • Centre for Data Ethics and Innovation. 2022. “Public Attitudes to Data and AI: Tracker Survey.” Accessed March, 2023. https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey
  • Chan, A., C.T. Okolo, Z. Terner, and A. Wang. 2021. “The Limits of Global Inclusion in AI Development.” Accessed March, 2022. https://arxiv.org/abs/2102.01265
  • Chaudhry, I.S., R.Y. Paquibut, and H. Chabchoub. 2022. “Factors Influencing Employees Trust in AI; It’s Adoption at Work: Evidence from United Arab Emirates.” In 2022 International Arab Conference on Information Technology (ACIT), 1–7. IEEE. doi:10.1109/acit57182.2022.9994226.
  • Cheng, T.E., D.Y. Lam, and A.C. Yeung. 2006. “Adoption of Internet Banking: An Empirical Study in Hong Kong.” Decision Support Systems 42 (3): 1558–1572. doi:10.1016/j.dss.2006.01.002.
  • Choi, Y. 2020. “A Study of Employee Acceptance of Artificial Intelligence Technology.” European Journal of Management and Business Economics 30 (3): 318–330. doi:10.1108/EJMBE-06-2020-0158.
  • Choung, H., P. David, and A. Ross. 2022. “Trust in ai and Its Role in the Acceptance of ai Technologies.” International Journal of Human–Computer Interaction 39 (9): 1727–1739. doi:10.1080/10447318.2022.2050543.
  • Chow, A., and B. Perrigo. 2023. “The AI Arms Race is Changing Everything.” Time. Accessed February, 2023. https://time.com/6255952/ai-impact-chatgpt-microsoft-google/.
  • Cohen, J. 2013. Statistical Power Analysis for the Behavioral Sciences. USA: Academic Press.
  • Couldry, N., C. Rodriguez, G. Bolin, J. Cohen, G. Goggin, M.M. Kraidy, K. Iwabuchi, K.-S. Lee, J. Qiu, and I. Volkmer. 2018. “Inequality and Communicative Struggles in Digital Times: A Global Report on Communication for Social Progress.” Accessed January, 2023. https://repository.upenn.edu/cgi/viewcontent.cgi?article=1001&context=cargc_strategicdocuments
  • Davis, F.D. 1985. A Technology Acceptance Model for Empirically Testing New End-User Information Systems: Theory and Results. USA: Massachusetts Institute of Technology.
  • Davis, F.D. 1989. “Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology.” MIS Quarterly 13 (3): 319–340. doi:10.2307/249008.
  • Deloitte and Reform. 2018. “Citizens, Government and Business: The State of the State 2017-18.” Accessed January, 2023. https://www2.deloitte.com/content/dam/Deloitte/uk/Documents/public-sector/deloitte-uk-the-state-of-the-state-report-2017.pdf
  • Dinev, T., and P. Hart. 2006. “An Extended Privacy Calculus Model for e-Commerce Transactions.” Information Systems Research 17 (1): 61–80. doi:10.1287/isre.1060.0080.
  • Donia, J., and J.A. Shaw. 2021. “Co-Design and Ethical Artificial Intelligence for Health: An Agenda for Critical Research and Practice.” Big Data & Society 8 (2): 205395172110652. doi:10.1177/20539517211065248.
  • European Commission. 2017. “Special Eurobarometer 460-Attitudes towards the Impact of Digitisation and Automation on Daily Life.” Accessed January, 2023. https://ec.europa.eu/info/departments/communication
  • European Commission. 2022. “The Digital Services Act Package.” Accessed March, 2023. https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package
  • Evans, R., and E. Ferguson. 2014. “Defining and Measuring Blood Donor Altruism: A Theoretical Approach from Biology, Economics and Psychology.” Vox Sanguinis 106 (2): 118–126. doi:10.1111/vox.12080.
  • Eyers, J. 2021. “Banks Warned Using AI in Loan Assessments Could ‘Awaken a Zombie’.” Australian Financial Review. https://www.afr.com/companies/financial-services/banks-warned-using-ai-in-loan-assessments-could-awaken-a-zombie-20210615-p5814i
  • Faul, F., E. Erdfelder, A. Buchner, and A.-G. Lang. 2009. “Statistical Power Analyses Using G*Power 3.1: Tests for Correlation and Regression Analyses.” Behavior Research Methods 41 (4): 1149–1160. doi:10.3758/BRM.41.4.1149.
  • Ferguson, E. 2015. “Mechanism of Altruism Approach to Blood Donor Recruitment and Retention: A Review and Future Directions.” Transfusion Medicine 25 (4): 211–226. doi:10.1111/tme.12233.
  • Ferguson, E., and C. Lawrence. 2016. “Blood Donation and Altruism: The Mechanisms of Altruism Approach.” ISBT Science Series 11 (S1): 148–157. doi:10.1111/voxs.12209.
  • Fietta, V., F. Zecchinato, B. Di Stasi, M. Polato, and M. Monaro. 2022. “Dissociation between Users’ Explicit and Implicit Attitudes toward Artificial Intelligence: An Experimental Study.” IEEE Transactions on Human-Machine Systems 52 (3): 481–489. doi:10.1109/THMS.2021.3125280.
  • Fishbein, M. 2008. “A Reasoned Action Approach to Health Promotion.” Medical Decision Making: An International Journal of the Society for Medical Decision Making 28 (6): 834–844. doi:10.1177/0272989x08326092.
  • Fishbein, M., and I. Ajzen. 1975. Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research. Reading, MA: Addison-Wesley.
  • Gado, S., R. Kempen, K. Lingelbach, and T. Bipp. 2021. “Artificial Intelligence in Psychology: How Can We Enable Psychology Students to Accept and Use Artificial Intelligence?” Psychology Learning & Teaching 21 (1): 37–56. doi:10.1177/14757257211037149.
  • Gefen, D., E. Karahanna, and D.W. Straub. 2003. “Trust and TAM in Online Shopping: An Integrated Model.” MIS Quarterly 27 (1): 51–90. doi:10.2307/30036519.
  • Gibbons, F.X., M. Gerrard, H. Blanton, and D.W. Russell. 1998. “Reasoned Action and Social Reaction: Willingness and Intention as Independent Predictors of Health Risk.” Journal of Personality and Social Psychology 74 (5): 1164–1180. doi:10.1037/0022-3514.74.5.1164.
  • Gillath, O., T. Ai, M.S. Branicky, S. Keshmiri, R.B. Davison, and R. Spaulding. 2021. “Attachment and Trust in Artificial Intelligence.” Computers in Human Behavior 115: 106607. doi:10.1016/j.chb.2020.106607.
  • Gomez Ortega, A., J. Bourgeois, and G. Kortuem. 2022. “Reconstructing Intimate Contexts through Data Donation: A Case Study in Menstrual Tracking Technologies.” Paper presented at the NordiCHI '22: Nordic Human-Computer Interaction Conference, Aarhus, Denmark, October 2022. doi:10.1145/3546155.3546646.
  • Goyal, M., T. Knackstedt, S. Yan, and S. Hassanpour. 2020. “Artificial Intelligence-Based Image Classification Methods for Diagnosis of Skin Cancer: Challenges and Opportunities.” Computers in Biology and Medicine 127: 104065. doi:10.1016/j.compbiomed.2020.104065.
  • Guo, X., X. Han, X. Zhang, Y. Dang, and C. Chen. 2015. “Investigating m-Health Acceptance from a Protection Motivation Theory Perspective: Gender and Age Differences.” Telemedicine Journal and e-Health 21 (8): 661–669. doi:10.1089/tmj.2014.0166.
  • Gursoy, D., O.H. Chi, L. Lu, and R. Nunkoo. 2019. “Consumers Acceptance of Artificially Intelligent (AI) Device Use in Service Delivery.” International Journal of Information Management 49: 157–169. doi:10.1016/j.ijinfomgt.2019.03.008.
  • Harrington, C., S. Erete, and A.M. Piper. 2019. “Deconstructing Community-Based Collaborative Design: Towards More Equitable Participatory Design Engagements.” Proceedings of the ACM on Human-Computer Interaction 3 (CSCW): 1–25. doi:10.1145/3359318.
  • Hoofnagle, C.J., J. King, S. Li, and J. Turow. 2010. “How Different Are Young Adults from Older Adults When It Comes to Information Privacy Attitudes and Policies?” SSRN Electronic Journal 1–20. doi:10.2139/ssrn.1589864.
  • Jarrahi, M.H., C. Lutz, K. Boyd, C. Oesterlund, and M. Willis. 2023. “Artificial Intelligence in the Work Context.” Journal of the Association for Information Science and Technology 74 (3): 303–310. doi:10.1002/asi.24730.
  • Kaplan, A.D., T.T. Kessler, J.C. Brill, and P. Hancock. 2021. “Trust in Artificial Intelligence: Meta-Analytic Findings.” Human Factors 65 (2): 337–359. doi:10.1177/00187208211013988.
  • Kapoor, A., and R.S. Whitt. 2021. “Nudging towards Data Equity: The Role of Stewardship and Fiduciaries in the Digital Economy.” SSRN Electronic Journal 1–18. doi:10.2139/ssrn.3791845.
  • Kaye, S.-A., I. Lewis, S. Forward, and P. Delhomme. 2020. “A Priori Acceptance of Highly Automated Cars in Australia, France, and Sweden: A Theoretically-Informed Investigation Guided by the TPB and UTAUT.” Accident; Analysis and Prevention 137: 105441. doi:10.1016/j.aap.2020.105441.
  • Kelly, S., S.-A. Kaye, and O. Oviedo-Trespalacios. 2022. “A Multi-Industry Analysis of the Future Use of AI Chatbots.” Human Behavior and Emerging Technologies 2022: 1–14. doi:10.1155/2022/2552099.
  • Kelly, S., S.-A. Kaye, and O. Oviedo-Trespalacios. 2023. “What Factors Contribute to the Acceptance of Artificial Intelligence? A Systematic Review.” Telematics and Informatics 77: 101925. doi:10.1016/j.tele.2022.101925.
  • Lashbrook, A. 2018. Ai-driven Dermatology could Leave Dark-skinned Patients Behind. The Atlantic. https://www.theatlantic.com/health/archive/2018/08/machine-learning-dermatology-skin-color/567619/
  • Lawrence, N., and N. Oh. 2021. “Enabling Data Sharing for Social Benefit through Data Trusts.” Accessed March, 2023. https://gpai.ai/projects/data-governance/data-trusts/enabling-data-sharing-for-social-benefit-through-data-trusts.pdf
  • Leslie, D., A. Mazumder, A. Peppin, M.K. Wolters, and A. Hagerty. 2021. “Does “AI” Stand for Augmenting Inequality in the Era of Covid-19 Healthcare?” BMJ (Clinical Research ed.) 372: n304. doi:10.1136/bmj.n304.
  • Lin, H.-C., Y.-F. Tu, G.-J. Hwang, and H. Huang. 2021. “From Precision Education to Precision Medicine: Factors Affecting Medical Staff’s Intention to Learn to Use AI Applications in Hospitals.” Journal of Educational Technology & Society 24 (1): 123–137. https://www.jstor.org/stable/26977862.
  • Liu, K., and D. Tao. 2022. “The Roles of Trust, Personalization, Loss of Privacy, and Anthropomorphism in Public Acceptance of Smart Healthcare Services.” Computers in Human Behavior 127: 107026. doi:10.1016/j.chb.2021.107026.
  • Liu, Z., J. Shan, and Y. Pigneur. 2016. “The Role of Personalized Services and Control: An Empirical Evaluation of Privacy Calculus and Technology Acceptance Model in the Mobile Context.” Journal of Information Privacy and Security 12 (3): 123–144. doi:10.1080/15536548.2016.1206757.
  • Lockey, S., N. Gillespie, and S. Curtis. 2020. Trust in Artificial Intelligence: Australian Insights. Accessed March, 2023.
  • Luo, J., and G. Gao. 2022. “Donor Recognition: A Double‐Edged Sword in Charitable Giving.” Journal of Philanthropy and Marketing 28 (1): 1–20. doi:10.1002/nvsm.1772.
  • McMahon, R., and M. Byrne. 2008. “Predicting Donation among an Irish Sample of Donors and Nondonors: Extending the Theory of Planned Behavior.” Transfusion 48 (2): 321–331. doi:10.1111/j.1537-2995.2007.01526.x.
  • Memon, A.M., and A. Memon. 2021. “Exploring Acceptance of Artificial Intelligence Amongst Healthcare Personnel: A Case in a Private Medical Centre.” International Research Journal of Modernization in Engineering Technology and Science 3 (9): 1–11.
  • Meyer-Waarden, B.L., and J. Cloarec. 2021. “‘Baby, You Can Drive My Car’: Psychological Antecedents That Drive Consumers’ Adoption of AI-Powered Autonomous Vehicles.” Technovation 109: 102348. doi:10.1016/j.technovation.2021.102348.
  • Monteith, S., T. Glenn, J. Geddes, P.C. Whybrow, E. Achtyes, and M. Bauer. 2022. “Expectations for Artificial Intelligence (AI) in Psychiatry.” Current Psychiatry Reports 24 (11): 709–721. doi:10.1007/s11920-022-01378-5.
  • Mujcic, R., and A. Leibbrandt. 2018. “Indirect Reciprocity and Prosocial Behaviour: Evidence from a Natural Field Experiment.” The Economic Journal 128 (611): 1683–1699. doi:10.1111/ecoj.12474.
  • Mun, Y.Y., J.D. Jackson, J.S. Park, and J.C. Probst. 2006. “Understanding Information Technology Acceptance by Individual Professionals: Toward an Integrative View.” Information & Management 43 (3): 350–363. doi:10.1016/j.im.2005.08.006.
  • Na, S., S. Heo, S. Han, Y. Shin, and Y. Roh. 2022. “Acceptance Model of Artificial Intelligence (AI)-Based Technologies in Construction Firms: Applying the Technology Acceptance Model (TAM) in Combination with the Technology–Organisation–Environment (TOE) Framework.” Buildings 12 (2): 90. doi:10.3390/buildings12020090.
  • Nadkarni, S., and R. Gupta. 2007. “A Task-Based Model of Perceived Website Complexity.” MIS Quarterly 31 (3): 501–524. doi:10.2307/25148805.
  • Nazaretsky, T., M. Ariely, M. Cukurova, and G. Alexandron. 2022. “Teachers’ Trust in AI‐Powered Educational Technology and a Professional Development Program to Improve It.” British Journal of Educational Technology 53 (4): 914–931. doi:10.1111/bjet.13232.
  • Parkes, E., J. Hardinges, J. Crowe, J. Massey, and S. Moriniere. 2023. “Defining Responsible Data Stewardship.” Accessed April 2023. https://theodi.org/insights/reports/defining-responsible-data-stewardship/
  • Patel, R., A. Peppin, V. Pavel, J. Brennan, I. Parker, and C. Safak. 2021. “Participatory Data Stewardship.” Accessed March, 2023. https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/
  • Peppin, A. 2022. “Who Cares What the Public Think?” https://www.adalovelaceinstitute.org/evidence-review/public-attitudes-data-regulation/
  • Piantadosi, S.T. [@spiantado]. 2023. “Yes, ChatGPT Is Amazing and Impressive. No, @OpenAI Has Not Come Close to Addressing the Problem of Bias. Filters Appear to Be Bypassed with Simple Tricks, and Superficially Masked. And What Is Lurking Inside Is Egregious.” Twitter. Accessed March 2023. https://twitter.com/spiantado/status/1599462375887114240?lang=en
  • Pilz, D., and H. Gewald. 2013. “Does Money Matter? Motivational Factors for Participation in Paid- and Non-Profit-Crowdsourcing Communities.” Paper presented at the 11th International Conference on Wirtschaftsinformatik, Leipzig, Germany, March 2013. https://www.researchgate.net/publication/270811012_Does_Money_Matter_Motivational_Factors_for_Participation_in_Paid-and_Non-Profit-Crowdsourcing_Communities
  • Platt, J., and S. Kardia. 2015. “Public Trust in Health Information Sharing: Implications for Biobanking and Electronic Health Record Systems.” Journal of Personalized Medicine 5 (1): 3–21. doi:10.3390/jpm5010003.
  • Pomery, E.A., F.X. Gibbons, M. Reis-Bergan, and M. Gerrard. 2009. “From Willingness to Intention: Experience Moderates the Shift from Reactive to Reasoned Behavior.” Personality & Social Psychology Bulletin 35 (7): 894–908. doi:10.1177/0146167209335166.
  • Rafique, H., A.O. Almagrabi, A. Shamim, F. Anwar, and A.K. Bashir. 2020. “Investigating the Acceptance of Mobile Library Applications with an Extended Technology Acceptance Model (Tam).” Computers & Education 145: 103732. doi:10.1016/j.compedu.2019.103732.
  • Richter, G., C. Borzikowsky, B.F. Hoyer, M. Laudes, and M. Krawczak. 2021. “Secondary Research Use of Personal Medical Data: Patient Attitudes towards Data Donation.” BMC Medical Ethics 22 (1): 164. doi:10.1186/s12910-021-00728-x.
  • Ridsdale, C., J. Rothwell, M. Smit, H. Ali-Hassan, M. Bliemel, D. Irvine, D. Kelley, S. Matwin, and B. Wuetherick. 2015. “Strategies and Best Practices for Data Literacy Education: Knowledge Synthesis Report.” Dalhousie University. doi:10.13140/RG.2.1.1922.5044.
  • Robertson, A., and M. Maccarone. 2022. “AI Narratives and Unequal Conditions. Analyzing the Discourse of Liminal Expert Voices in Discursive Communicative Spaces.” Telecommunications Policy 47 (5): 102462. doi:10.1016/j.telpol.2022.102462.
  • Schmidthuber, L., D. Hilgers, and K. Randhawa. 2021. “Public Crowdsourcing: Analyzing the Role of Government Feedback on Civic Digital Platforms.” Public Administration 100 (4): 960–977. doi:10.1111/padm.12811.
  • Seger, E., A. Ovadya, B. Garfinkel, D. Siddarth, and A. Dafoe. 2023. “Democratising AI: Multiple Meanings, Goals, and Methods.” In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 715–722. doi:10.48550/arXiv.2303.12642.
  • Selwyn, N., and B. Gallo Cordoba. 2022. “Australian Public Understandings of Artificial Intelligence.” AI & Society 37 (4): 1645–1662. doi:10.1007/s00146-021-01268-z.
  • Seo, K.H., and J.H. Lee. 2021. “The Emergence of Service Robots at Restaurants: Integrating Trust, Perceived Risk, and Satisfaction.” Sustainability 13 (8): 4431. doi:10.3390/su13084431.
  • Septianto, F., F. Tjiptono, W. Paramita, and T.M. Chiew. 2020. “The Interactive Effects of Religiosity and Recognition in Increasing Donation.” European Journal of Marketing 55 (1): 1–26. doi:10.1108/EJM-04-2019-0326.
  • Singh, S., and N. Ramakrishnan. 2023. Is ChatGPT Biased? A Review. doi:10.31219/osf.io/9xkbu.
  • Skatova, A., and J. Goulding. 2019. “Psychology of Personal Data Donation.” PLoS One 14 (11): e0224240. doi:10.1371/journal.pone.0224240.
  • Sloane, M. 2019. “Inequality is the Name of the Game. Thoughts on the Emerging Field of Technology, Ethics and Social Justice.” Paper presented at the Weizenbaum Conference 2019 Challenges of Digital Inequity, Berlin, Germany. doi:10.34669/wi.cp.
  • Sloane, M., E. Moss, O. Awomolo, and L. Forlano. 2020. “Participation Is Not a Design Fix for Machine Learning.” In Equity and Access in Algorithms, Mechanisms, and Optimization, 1–6. Washington, DC. doi:10.1145/3551624.3555285.
  • Song, Y.W. 2019. “User Acceptance of an Artificial Intelligence (AI) Virtual Assistant: An Extension of the Technology Acceptance Model.” Thesis, DDU. https://search.ebscohost.com/login.aspx?direct=true&db=ddu&AN=194F628A9A233EB6&site=ehost-live&scope=site.
  • Sousa, S., and G. Beltrão. 2021. “Factors Influencing Trust Assessment in Technology.” Paper presented at the IFIP Conference on Human-Computer Interaction, Proceedings, Part V, August 30 – September 3, 2021, Bari, Italy. Accessed March, 2022. https://doi.org/10.1007/978-3-030-85607-6_49#Sec2
  • Spitale, G., N. Biller-Andorno, and F. Germani. 2023. “AI Model GPT-3 (Dis) Informs us Better than Humans.” arXiv preprint arXiv:2301.11924.
  • Srinivasan, A. 2021. The Right to Sex. UK: Bloomsbury Publishing.
  • Stracke, C.M. 2020. “Open Science and Radical Solutions for Diversity, Equity and Quality in Research: A Literature Review of Different Research Schools, Philosophies and Frameworks and Their Potential Impact on Science and Education.” 17–37. Singapore: Springer. doi:10.1007/978-981-15-4276-3_2.
  • Swisher, K. 2023. “Sam Altman on What Makes Him ‘Super Nervous’ About AI: The OpenAI Co-Founder Thinks Tools like GPT-4 Will Be Revolutionary, but He’s Wary of Downsides.” On with Kara Swisher. https://nymag.com/intelligencer/2023/03/on-with-kara-swisher-sam-altman-on-the-ai-revolution.html
  • Thornton, B., F.X. Gibbons, and M. Gerrard. 2002. “Risk Perception and Prototype Perception: Independent Processes Predicting Risk Behavior.” Personality and Social Psychology Bulletin 28 (7): 986–999. doi:10.1177/014616720202800711.
  • van Eeuwen, M. 2017. “Mobile Conversational Commerce: Messenger Chatbots as the Next Interface between Businesses and Consumers.” University of Twente. Accessed March, 2023. http://purl.utwente.nl/essays/71706
  • van Empelen, P., and G. Kok. 2006. “Condom Use in Steady and Casual Sexual Relationships: Planning, Preparation and Willingness to Take Risks among Adolescents.” Psychology & Health 21 (2): 165–181. doi:10.1080/14768320500229898.
  • Venkatesh, V., and F.D. Davis. 2000. “A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies.” Management Science 46 (2): 186–204. doi:10.1287/mnsc.46.2.186.11926.
  • Venkatesh, V., M.G. Morris, G.B. Davis, and F.D. Davis. 2003. “User Acceptance of Information Technology: Toward a Unified View.” MIS Quarterly 27 (3): 425–478. doi:10.2307/30036540.
  • Walsh, T. 2018. 2062: The World That AI Made. Australia: La Trobe University Press.
  • White, K.M., B.E. Poulsen, and M.K. Hyde. 2017. “Identity and Personality Influences on Donating Money, Time, and Blood.” Nonprofit and Voluntary Sector Quarterly 46 (2): 372–394. doi:10.1177/0899764016654280.
  • Whittlestone, J., R. Nyrup, A. Alexandrova, K. Dihal, and S. Cave. 2019. “Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research.” Accessed December, 2022. https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf.
  • Wijnhoven, F., M. Ehrenhard, and J. Kuhn. 2015. “Open Government Objectives and Participation Motivations.” Government Information Quarterly 32 (1): 30–42. doi:10.1016/j.giq.2014.10.002.
  • Xiang, Y., L. Zhao, Z. Liu, X. Wu, J. Chen, E. Long, D. Lin, Y. Zhu, C. Chen, Z. Lin, and H. Lin. 2020. “Implementation of Artificial Intelligence in Medicine: Status Analysis and Development Suggestions.” Artificial Intelligence in Medicine 102: 101780.
  • Yang, K., Z. Zeng, H. Peng, and Y. Jiang. 2019. “Attitudes of Chinese Cancer Patients toward the Clinical Use of Artificial Intelligence.” Patient Preference and Adherence 13: 1867–1875. doi:10.2147/ppa.s225952.
  • Yigitcanlar, T., K. Degirmenci, and T. Inkinen. 2022. “Drivers behind the Public Perception of Artificial Intelligence: Insights from Major Australian Cities.” AI & Society: 1–21. doi:10.1007/s00146-022-01566-0.
  • Young, A. 2018. “About the Data Stewards Network.” Accessed March, 2023. https://medium.com/data-stewards-network/about-the-data-stewards-network-1cb9db0c0792

Appendix A

Table A1. Extended technology acceptance model items.

Appendix B

Table B1. Data donation items.

Appendix C

Table C1. Scenarios.

Appendix D

Table D1. Pearson correlation coefficients.