ABSTRACT
The paradigm of developing mobile apps has shifted from native apps that store data on mobile devices to mobile cloud computing (MCC) apps that send data to the cloud. Transferring users’ data to the cloud provides several benefits, such as larger storage capacity and simultaneous access by multiple devices and users. However, storing data in the cloud also raises privacy concerns because users do not have direct control over their data. This study reports a privacy cost–benefit analysis, including the moderating effects of dispositional traits (i.e. two personality meta-traits: stability and plasticity) and a behaviour-based trait (i.e. use experience), to understand information disclosure behaviour. The empirical study is based on a scenario-based survey (n = 807) of a diverse sample of MCC app users. The results support the moderating effects of personality meta-traits; stability and plasticity differentially moderate the effects of perceived privacy risk and perceived value of data transfer to the cloud on information disclosure behaviour. Contrary to prior research, prior use experience does not moderate the effects of the cost–benefit perceptions. Implications for research and practice are discussed.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1 MCC apps include different categories, such as note-taking (e.g. Evernote), file storage (e.g. Google Drive), and backup (e.g. iCloud).
2 CloudResearch is a crowdsourcing platform that enables targeting specific users and follows several procedures to collect high quality data.
3 The process we used to improve the quality of data included the following steps. First, we checked IP addresses and geolocation information to remove any responses from outside the USA, thereby excluding major cultural differences among respondents from the data. We removed any response that could not correctly name an MCC app. We also removed responses completed in less than five minutes (based on the pilot study feedback). Moreover, we included attention-check questions and excluded any responses that did not answer them correctly, to obtain responses with a higher level of precision. Finally, we detected and removed outliers from our study as recommended by prior research (e.g. Chatterjee and Hadi Citation1986).
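The screening steps in this note can be sketched as a simple filtering pipeline. The sketch below uses pandas with hypothetical column names (`country`, `app_name`, `duration_min`, `attention_ok`) and a hypothetical list of valid MCC apps; these are illustrative assumptions, not the authors' actual variables or data.

```python
import pandas as pd

# Hypothetical survey responses; column names and values are illustrative,
# not the study's actual variables or data.
df = pd.DataFrame({
    "country": ["USA", "USA", "Canada", "USA", "USA"],
    "app_name": ["Evernote", "Dropbox", "iCloud", "Foo", "Google Drive"],
    "duration_min": [12.0, 4.2, 10.0, 15.0, 9.0],
    "attention_ok": [True, True, True, True, False],
})

# Assumed list of recognisable MCC apps for the name check.
valid_apps = {"Evernote", "Dropbox", "iCloud", "Google Drive", "OneDrive"}

cleaned = df[
    (df["country"] == "USA")            # geolocation filter: USA only
    & df["app_name"].isin(valid_apps)   # correctly named an MCC app
    & (df["duration_min"] >= 5)         # completed in at least five minutes
    & df["attention_ok"]                # passed the attention checks
]
```

Each boolean condition mirrors one screening step from the note; outlier detection would follow as a further step on `cleaned`.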
4 We first adopted Harman’s one-factor test in exploratory factor analysis to find whether one component explains a very high percentage of the variance in the model (Podsakoff and Organ Citation1986). We found that the first factor could only explain 23 percent of the model variance. Then, we ran Lindell and Whitney’s (Citation2001) marker variable test by using the smallest observed correlation in our dataset. The item-to-item correlation matrices show that the marker variable’s correlations with the nine variables of the model range from −0.05 to 0.08, and only two of these nine correlations were significant. Thus, the results of Harman’s test and the marker variable test indicate that common method variance is not a threat in this study.
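Harman's one-factor test amounts to checking how much variance the first unrotated factor explains across all items. A minimal sketch, using simulated Likert-scale data (not the study's data) and the eigenvalues of the item correlation matrix as a proxy for the unrotated factor solution:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated responses: 200 respondents x 9 Likert-scale items (1-7),
# generated purely to illustrate the procedure.
X = rng.integers(1, 8, size=(200, 9)).astype(float)

# Harman's one-factor test: share of total variance explained by the
# first factor, taken here from the eigenvalues of the correlation matrix.
R = np.corrcoef(X, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]          # sorted descending
first_factor_share = eigvals[0] / eigvals.sum()

# Common method variance is a concern only if a single factor dominates
# (e.g. explains the majority of the variance); the study reports 23%.
print(round(first_factor_share, 2))
```

The same correlation matrix would also supply the marker-variable correlations used in Lindell and Whitney's test.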
5 For formative second-order constructs, we used a two-stage PLS approach to estimate the latent variable scores and then used the scores in regression analysis (Ye and Kankanhalli Citation2018; Liao et al. Citation2011).