Abstract

Scholars are increasingly calling for a deeper understanding of cyberharassment (CH) with the goal of devising policies, procedures, and technologies to mitigate it. Accordingly, we conducted CH research that (1) integrated social learning theory (SLT) and self-control theory (SCT); (2) empirically studied this model with two contrasting samples, experienced cyberharassers and less experienced cyberharassers; and (3) conducted post hoc tests to tease out the differences between the two samples. We show that for less experienced cyberharassers, CH is largely a social-psychological-technological phenomenon, whereas for experienced cyberharassers it is primarily a psychological-technological phenomenon. Our study makes a threefold contribution: (1) it shows the value of integrating two theories in a holistic and parsimonious manner to explain CH; (2) it shows that SCT alone is a more relevant framework for experienced cyberharassers, whereas a combination of SCT and SLT better explains less experienced cyberharassers; and (3) it reveals that the role of technology in fostering CH is crucial, regardless of the sample. The differential, yet consistent, findings demonstrate that addressing CH is contingent not only on identifying theoretical approaches but also on identifying the particular samples for which those approaches are most suitable. Of several implications for practice, the most important may be that anonymity, asynchronicity, and lack of monitoring are the technology choices that foster CH, and thus these should be mitigated in designing social media and other communication technologies.

Acknowledgement

The authors acknowledge support from the National Natural Science Foundation of China (Grant No. 71801205, 71921001, 71871095, 71601080, 71801100, and 71801217), and the Fundamental Research Funds for the Central Universities (Grant No. WK2040160028).

Supplemental Material

Supplemental data for this article can be accessed on the publisher's website.

Notes

1. Following [124], the three key foci for producing leading CH research are as follows. (1) Ground CH phenomena in a strong theoretical basis using new and insightful theoretical perspectives, including factoring in the role of technology in CH. (2) Use powerful research methods to address CH phenomena, specifically from a sociotechnical angle. Given the rapid technological advances in cyberspace, there is a need not only to infuse CH research with sophisticated causal theory but also to “consider emerging methods and strategies that are relevant to new and emerging media, online behaviors, and the online spaces in which … people congregate” [110, pp. 197-198]. (3) Engage in causal modeling to identify the key human and technological factors associated with CH so that it can be mitigated through powerful interventions.

2. Crucially, we use self-control theory, not social control theory (also known as social bond theory) or social cognitive theory. These are three distinct theories that use the same acronym but are not related.

3. Social influence refers to the tendency of individuals to rely on others’ actions to identify and model acceptable behaviors [27]. In this context, positive social influence refers to examples of socially appropriate behavior; negative social influence, the focus of this study, is the opposite.

4. SLT is a broad theory of learning originally proposed by [12] to explain the critical role played by social context in general learning; it is related to Bandura’s social cognitive theory. Learning is a cognitive process that occurs in social contexts and thus involves not only direct instruction but also modeling and observation of others’ behaviors, including the costs and benefits thereof [12]. For concision, the broader theory of SLT is not explained; rather, the focus is a contextualized subset created by Akers and Burgess along with other researchers [3, 7, 23], which is particularly useful for explaining deviant and aggressive behaviors.

5. Lack of monitoring is developed mainly on the basis of the concept of invisibility [112]. Lack of monitoring refers to the degree to which individuals perceive that their CH behaviors are not physically visible to or monitored by others.

6. Again, these include: (1) immediate rather than delayed gratification; (2) relative ease, both mentally and physically; and (3) the perception that deviant acts are less subject to detection and resistance.

7. At the beginning of both Study 1 and Study 2, participants were given a list of 14 major CH behaviors compiled from [80] and [111]. Study 1 participants were asked if they had recently committed one or more of these acts. If so, they were classified as “experienced cyberharassers” and allowed to continue; otherwise, they were excluded. In Study 2, to qualify as a “less experienced cyberharasser” and be allowed to continue, participants had to answer “never” or “rarely” to all questions about CH behaviors.

8. We required all respondents to be employed full-time. Respondents were fluent in English, were at least 18 years old, had at least five years of computer and Internet experience, and had reported recently committing at least one act of CH. To eliminate “professional” survey takers, participants who had completed more than a handful of such surveys were blocked. Participants were dropped if they did not meet the screening criteria or if their surveys were incomplete. Due to the length and sensitive nature of the survey, and to decrease mono-method bias, the best practice of using attention-trap questions was followed to determine whether respondents were reading all questions fully and answering honestly or were succumbing to social desirability bias [70, 73]; participants who failed these attention traps were dropped before they could continue with the survey. Moreover, working with the data providers, the pilot test determined the average amount of time it took participants to complete the survey. In the final study, any response that took one-third of the average time or less was marked for deletion, because the respondent was likely not paying full attention to all questions (100 percent of these responses had also failed the attention traps). Items were also randomized within the instrument so participants would be less apt to detect underlying constructs; measures with different scaling and anchors were used; reverse-coded items were used; and extensive warnings and instructions were provided to help participants maintain their focus on the survey. To further demonstrate that common-method bias was not an issue, per [101], an organizational commitment measure was gathered to serve as a marker variable; the analyses revealed that common-method bias was not an issue. Additional measures were taken to reduce social desirability bias. First, strong assurances of anonymity were provided to participants. To ensure anonymity, the best practice of using truly anonymous research panels was followed by working with a third party [11, 73, 91]. Panel data better guarantee anonymity because the respondents never interact with the researchers and the researchers never have access to their contact information. This also allowed for gathering respondents from a wide range of industries and positions who would have been virtually impossible to reach otherwise, which strengthened the generalizability of the results.
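
To make the screening logic concrete, the following Stata sketch shows how the two data-quality screens described above (the response-time cutoff and the attention traps) could be applied. It is an illustrative sketch only: the variable names, the assumed pilot-test average of 900 seconds, and the expected trap answers are placeholders, not the study's actual instrument values.

```stata
* Illustrative screening sketch -- variable names, the 900-second pilot
* average, and the expected attention-trap answers are assumptions.
local pilot_avg_seconds = 900

* Drop speeders: completion time at or below one-third of the pilot average
* (in the study, all such responses had also failed the attention traps).
drop if duration_seconds <= `pilot_avg_seconds'/3

* Drop respondents who failed either attention-trap item; in the study itself,
* these respondents were screened out during the survey rather than post hoc.
drop if trap_item1 != 5 | trap_item2 != 1
```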

9. We specifically required that the rating of at least one of the 21 CH behaviors was 3 or above out of 5.

10. Convergent and discriminant validity were assessed with confirmatory factor analysis in Stata. Judged against standards in the literature, such as CFI > 0.9, SRMR < 0.1, RMSEA < 0.08 [24, 55, 62], TLI > 0.9 [17], and CD > 0.45 [16], the model fit was good. Convergent validity was supported by large, significant standardized loadings for all constructs (p < .001), with t-values exceeding the threshold for statistical significance, and by ratios of factor loadings to their respective standard errors that exceeded |10.0| (p < .001) [43, 71]. Cronbach’s alpha and summary statistics for the constructs are shown in Table B.1 in Online Supplemental Appendix B. Discriminant validity was tested by showing that the measurement model fit significantly better than a competing model with a single latent construct and better than all other competing models in which pairs of latent constructs were joined. The χ2 values of the competing models (omitted for brevity) were significantly larger than that of the original model, as also suggested by the factor loadings, modification indices, and residuals [78]. The correlation matrix and average variance extracted (Online Supplemental Appendix B, Table B.11) also strongly supported discriminant validity. In summary, these tests confirmed convergent and discriminant validity. Moreover, tests were conducted for common-method bias, mediation, and moderation (see Online Supplemental Appendix B).
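
As a minimal sketch of the measurement analysis described above, the following Stata commands fit a CFA, report the fit indices listed (χ2, RMSEA, CFI, TLI, SRMR, and CD), and compare the hypothesized model against a competing single-factor model with a χ2 (likelihood-ratio) difference test. The construct and item names are placeholders, not the study's actual measures.

```stata
* Minimal CFA sketch in Stata -- construct and item names are placeholders.
sem (Anonymity      -> anon1 anon2 anon3) ///
    (Asynchronicity -> async1 async2 async3) ///
    (LackMonitoring -> mon1 mon2 mon3), standardized
estat gof, stats(all)              // chi2, RMSEA, CFI, TLI, SRMR, CD
estimates store hypothesized

* Competing single-factor model for the discriminant-validity comparison
sem (General -> anon1 anon2 anon3 async1 async2 async3 mon1 mon2 mon3)
estimates store single_factor

* chi2 difference (likelihood-ratio) test between the nested models
lrtest hypothesized single_factor
```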

Additional information

Notes on contributors

Paul Benjamin Lowry

Paul Benjamin Lowry ([email protected]) is the Suzanne Parker Thornhill Chair Professor and Eminent Scholar in Business Information Technology at the Pamplin College of Business at Virginia Tech. He received his Ph.D. in Management Information Systems from the University of Arizona. His research interests include organizational and behavioral security and privacy; online deviance, online harassment, and computer ethics; human-computer interaction, social media, and gamification; and business analytics, decision sciences, innovation, and supply chains. Dr. Lowry has published over 120 journal articles in Journal of Management Information Systems (JMIS), MIS Quarterly, Information Systems Research, Journal of the AIS, and other journals. He is a member of the Editorial Board of JMIS, department editor at Decision Sciences Journal, and senior or associate editor of several other journals. He has also served multiple times as track co-chair at the International Conference on Information Systems, European Conference on Information Systems, and Pacific Asia Conference on Information Systems.

Jun Zhang

Jun Zhang ([email protected]; corresponding author) is an assistant professor in MIS at the International Institute of Finance, School of Management, University of Science and Technology of China. He holds a Ph.D. in Information Systems from City University of Hong Kong. His research centers on online deviant behaviors, information privacy and security, and IT-enabled health behavior change. His work has been published in such journals as Journal of Management Information Systems, Information Systems Research, and Computers in Human Behavior.

Gregory D. Moody

Gregory D. Moody ([email protected]) is a Lee Professor of Information Systems in the Lee Business School at the University of Nevada, Las Vegas, and Director of the Graduate MIS program. He holds a Ph.D. from the University of Pittsburgh and a Ph.D. from the University of Oulu, Finland. His interests include IS security and privacy, e-business (electronic markets, trust), and human–computer interaction (Web site browsing, entertainment). Dr. Moody has published in Journal of Management Information Systems, MIS Quarterly, Information Systems Research, Journal of the AIS, and other journals. He is an associate editor of Information Systems Journal and of AIS Transactions on Human-Computer Interaction.

Sutirtha Chatterjee

Sutirtha Chatterjee ([email protected]) is an associate professor at the University of Nevada, Las Vegas. His research interests are ethical issues in IS, IT-enabled innovation, mobile work, and e-commerce. His research has been published in such journals as Journal of Management Information Systems, MIS Quarterly, Journal of the AIS (JAIS), and others. Dr. Chatterjee serves as a senior editor of JAIS and an associate editor of Information Systems Journal. He has chaired/co-chaired tracks or mini-tracks at the Hawaii International Conference on System Sciences and other IS conferences.

Chuang Wang

Chuang Wang ([email protected]) is an associate professor at the School of Business Administration, South China University of Technology. Her research focuses on the challenges and negative issues of information technology, as well as on social media, social networks, and mobile commerce. She has published in such journals as Journal of Management Information Systems, Information Systems Research, Journal of the AIS, and others.

Tailai Wu

Tailai Wu ([email protected]) is a lecturer at the School of Medicine and Health Management in Tongji Medical College, Huazhong University of Science and Technology, China. His research interests are in medical informatics and human-computer interaction. His work has been published in journals including Journal of Management Information Systems, Information Development, Journal of Medical Internet Research, and International Journal of Medical Informatics, and in several preeminent IS conference proceedings.
