
Examining the intended and unintended consequences of organisational privacy safeguards

Pages 37-65 | Received 25 Apr 2014, Accepted 04 May 2016, Published online: 19 Dec 2017

Abstract

Research shows that despite organisational efforts to achieve privacy compliance, privacy breaches continue to rise. The extant studies on organisational privacy compliance concentrate on the extent to which privacy threats can be alleviated through a combination of technical and human controls and on the positive (and often intended) influences of these controls. This focus inadvertently neglects unintended consequences, such as impeded workflow in medical practices. To address this gap, this study uses an interpretive grounded theory research approach to investigate the consequences of privacy safeguard enactment in medical practices, including whether it influences their ability to meet privacy requirements and whether workflows are impeded. Our central contribution is a theoretical framework, the unintended consequences of privacy safeguard enactment (UCPSE) framework, which explicates the process by which privacy safeguards are evaluated and subsequently bypassed and the resulting influence on organisational compliance. The UCPSE highlights the importance of the imbalance challenge, which arises when the unintended consequences of privacy safeguard enactment outweigh the intended ones. Failure to address the imbalance challenge leads to the adoption of workarounds that may ultimately harm the organisation’s privacy compliance. Despite several research calls, the consequences and effectiveness of organisational privacy efforts are largely missing from both information systems and health informatics research. This study is one of the first attempts to systematically identify the impacts of privacy safeguard enactment and to examine its implications for privacy compliance in the healthcare domain. The findings also have practical implications for healthcare executives, who can draw on the UCPSE to alleviate the imbalance challenge and thereby thwart workarounds and their negative effects on privacy compliance.

Introduction

Organisational governance of the information privacy of patient healthcare records has grown into a serious problem, and the U.S. requires healthcare providers to adhere to the strong information privacy protections outlined in the Health Insurance Portability and Accountability Act of 1996 (HIPAA) (Choi et al, Citation2006; Wall et al, Citation2016). Despite these protections, the information of over 134 million patients has been breached since 2009 (HHS, Citation2016). These breaches endanger the confidentiality of patient records and result in dire organisational consequences, such as reputation damage, monetary penalties and civil and criminal liabilities (Wall et al, Citation2016). To protect patients’ information from further breaches and to adhere to HIPAA without facing severe sanctions, healthcare organisations face major challenges in designing and implementing appropriate safeguards to mitigate information privacy threats (Choi et al, Citation2006).

Information privacy refers to the degree to which individuals and organisations control when, how and to what extent information about them is released (Claerhout & DeMoor, Citation2005). Greenaway and Chan (Citation2005) explain that organisational information privacy refers to how firms treat their customers’ personally identifiable information. Our focus is on organisational-level privacy violations and protection. Notably, Wall et al (Citation2016) outline four kinds of organisational privacy contexts, depending on whether organisational privacy procedures are internally generated (usually because of ethical and professional considerations) or externally generated (usually because of legal requirements, such as HIPAA) and on whether they focus on protecting the privacy of consumers (as HIPAA does) or of employees.

Our privacy context is compliance with externally governed privacy regulations that protect healthcare consumers. The selection of the healthcare domain is not arbitrary. According to the Privacy Rights Clearinghouse, a non-profit organisation that keeps up-to-date information about data breaches across all industries within the U.S., the healthcare industry has the highest number of breaches (PRC, Citation2016). This presents major challenges for healthcare organisations, which must minimise data breaches without impeding the practice of medicine. Increasing penalties for non-compliance and growing privacy operational demands (Croll, Citation2011) add further complexity. Accordingly, in our context, privacy compliance is defined at the organisational level (not the individual level) as the organisation complying with regulatory privacy rules to ensure the confidentiality of patients’ electronically protected health information (Wall et al, Citation2016). Previous researchers advocate implementing privacy safeguards as part of organisational privacy management strategies. Privacy safeguards refer to technical protections, human defences, physical precautions and organisational processes to protect privacy (Parks et al, Citation2011b). However, these safeguards are only partially effective, because privacy breaches continue to occur (Culnan & Williams, Citation2009; Wall et al, Citation2016). Despite the acknowledged importance of privacy safeguard enactment for overall privacy compliance, the consequences of these enactments remain comparatively understudied (Belanger & Crossler, Citation2011; Smith et al, Citation2011). Notably, when referring to enacting privacy safeguards in this study, we focus on the act of implementing safeguards, not only the decision to do so. This enactment, or implementation, can take many forms, including implementing technical measures such as encryption or passwords, adopting employment sanctions for looking up records that do not pertain to one’s line of work and enforcing physical security through the use of badges to restrict access to certain areas. Importantly, most privacy and security studies, whether consumer- or employee-centric, focus primarily on internal rather than external compliance with privacy protection measures. As Wall et al (Citation2016) explain, studies focusing on externally governed privacy and security rules are primarily exploratory and descriptive in nature, probably because organisation-centric data collection is difficult in this research area (Dhillon & Moores, Citation2001; Walczuch & Steeghs, Citation2001; Milne & Culnan, Citation2002; Nord & McCubbins, Citation2006; Crossler et al, Citation2013). Moreover, much research in this area uses surveys or hypothetical scenarios (e.g., D’Arcy et al, Citation2009; Siponen & Vance, Citation2010), both of which may be difficult to apply to an organisation-level analysis of compliance with external privacy regulations.

Moreover, to date, studies of organisational privacy compliance concentrate on the extent to which privacy threats can be alleviated through a combination of technical and human controls and on the positive (and often intended) impacts of these privacy controls. However, this focus overlooks the unintended impacts of privacy safeguards, such as those that undermine medical processes or patient care. This is no minor oversight. Recent empirical studies raise concerns that the unintended consequences of privacy safeguards impede employees’ needed access to information and overall workflow (cf., Chen & Xu, Citation2013) and induce employees to engage in workarounds to bypass privacy procedures or features (cf., Murphy et al, Citation2014). Given the centrality of privacy compliance and the pivotal nature of engaging in proper processes in healthcare, it is imperative that privacy researchers in the healthcare domain identify both the intended and unintended consequences of enacting privacy safeguards and their subsequent effects on privacy compliance at the organisational level.

The current study makes several contributions. First, it introduces a theoretical framework that explicates the process by which individual privacy safeguards are evaluated and examines their intended and unintended impacts on the organisation’s privacy compliance. Second, in response to the call by Belanger and Crossler (Citation2011), we study the actual outcomes and implications of privacy safeguards in healthcare organisations. Specifically, we use a grounded theory approach to provide a rich lens for understanding both the intended and unintended consequences of privacy safeguard enactment (UCPSE) and their implications for privacy compliance in medical practice. Third, this research contributes to the recent call for interdisciplinary research (Smith et al, Citation2011) by converging the research streams of information systems (IS) and health informatics. Fourth, because Smith et al (Citation2011) show a lack of organisation-level privacy research in the extant literature, noting that studies at the organisation level ‘are necessarily more complex and less conducive to ‘quick’ data collection techniques such as written and online surveys’ (p. 1006), our research targets this under-researched level of analysis through a grounded theory approach, thus providing new theoretical insights on organisational privacy management.

Background

This study’s literature review was structured around three criteria: domain, sources and search strategy (Cooper, Citation1998). The domain is interdisciplinary and includes IS and health informatics. The literature on health informatics was reviewed because it is contextually relevant. Our sources include the top five outlets identified by Lyytinen et al (Citation2007) for the IS field, and we use the Le Rouge and De Leo (Citation2010) ranking to select our sources from the leading journals on health informatics. Importantly, our literature review focuses on two main themes that define our qualitative research: (1) the privacy safeguards that are used in the healthcare industry and (2) the intended versus unintended consequences of enacting privacy safeguards in organisations.

Privacy safeguards in healthcare

There is a growing, yet still limited, body of research targeting organisational responses to privacy threats and issues (e.g., Greenaway & Chan, Citation2005; Culnan & Williams, Citation2009; Smith et al, Citation2011; Wall et al, Citation2016). One recent study suggests that organisations respond to the increasing list of privacy threats through a combination of technical and human controls, as well as processes (Parks et al, Citation2011a). In terms of technical safeguards, the health informatics literature is replete with research that proposes various technologies to address health information privacy threats, including the application of access control mechanisms to limit access to authorised users (Mohan & Yaacob, Citation2004; Blobel et al, Citation2006; Lovis et al, Citation2007; Chen et al, Citation2010), the use of anonymisation and pseudo-anonymisation to remove the identifiers from medical data (Quantin et al, Citation2000; Mohan & Yaacob, Citation2004; Aberdeen et al, Citation2010) and the adoption of encryption and cryptographic methods to make the data unreadable to anyone except those who hold the keys (Quantin et al, Citation2000).
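
To make one of these technical safeguards concrete, the minimal Python sketch below illustrates pseudonymisation: direct identifiers are replaced with stable keyed-hash pseudonyms so that records remain linkable for research without exposing identities. This is an illustrative sketch, not any specific system from the studies cited; the field names and key handling are hypothetical.

```python
import hmac
import hashlib

# Hypothetical key; in practice it would be held by a trusted party,
# separate from the research data set.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(record: dict, identifiers=("name", "ssn", "mrn")) -> dict:
    """Replace direct identifiers with stable keyed-hash pseudonyms.

    The same input always maps to the same pseudonym, so records can be
    linked across data sets without revealing the identifier itself.
    """
    out = dict(record)
    for field in identifiers:
        if field in out:
            digest = hmac.new(SECRET_KEY, str(out[field]).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]
    return out

# The diagnosis stays usable for research; the identifiers do not.
print(pseudonymise({"name": "Jane Doe", "mrn": "12345", "diagnosis": "J45.909"}))
```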

One of the biggest challenges in implementing the aforementioned technical safeguards is to develop systems or technologies that do not impede the operational activities of healthcare providers (Lovis et al, Citation2007). In the fields of IS and health informatics, the extant research studies the procedures that govern healthcare delivery processes and ensure information privacy. However, it has been more than two decades since Smith (Citation1993) published findings based on a study of organisational privacy policies, which drew attention to such problems as a lack of policy and gaps between different policies and practices. Although organisations in the U.S. are more likely to enact privacy protection measures due to HIPAA compliance requirements (Peslak, Citation2006), the gap between organisationally desired privacy behaviours and actual privacy behaviours remains substantial (Croll, Citation2011; Warkentin et al, Citation2011; Wall et al, Citation2016).

In terms of human safeguards, several studies investigate the impact of training and education on increasing compliance (Mohan & Yaacob, Citation2004; Yeh & Chang, Citation2007; D’Arcy et al, Citation2009; Fernando & Dawson, Citation2009). However, the evidence for the positive influence of these safeguards is mixed, especially in the light of research on employee misbehaviour and general compliance problems (Bulgurcu et al, Citation2010; Siponen & Vance, Citation2010; Hu et al, Citation2011; Hsu et al, Citation2015; Lowry & Moody, Citation2015; Lowry et al, Citation2015).

The consequences of enacting privacy safeguards

Much of the literature on information privacy rests on the assumption that the enactment of privacy safeguards and mechanisms leads to better protected information and to better compliance (see Table 1), both of which are intended consequences. However, the enactment of privacy safeguards can bring some unintended consequences as well. We posit in our study that there may often be an imbalance between these intended and unintended consequences.

Table 1 Mapping the consequences of enacting privacy safeguards

In the literature, unintended consequences refer to the unforeseen or unpredicted results of a specific action (Campbell et al, Citation2006). Organisational theorists study the impacts of unintended consequences in the domains of economic performance (Lal, Citation2001), project management (Brown, Citation2000) and organisational decision-making (Magasin & Gehlen, Citation1999). Recently, in the IS policy compliance literature, unintended consequences include negative employee reactions to policies, such as pushing back, doing the opposite of what is required, workarounds, malicious compliance and anger (Posey et al, Citation2011; Lowry & Moody, Citation2015; Lowry et al, Citation2015). This literature indicates that such unintended consequences can occur because of employees’ perceived threats to freedom, unfairness or privacy invasion.

The discussion of unintended consequences is relatively new in the privacy management literature. In the IS field, however, the concept of unintended consequences has its roots in diffusion of innovations (DOI) theory (Rogers, Citation1998), which posits that the consequences of adopting innovations can be intended or unintended and desirable or undesirable. Although our research focuses on the unintended negative consequences of organisational privacy safeguards in the healthcare domain, prior studies on DOI identify various unintended desirable and beneficial consequences of innovation adoption (Ash et al, Citation2004). Campbell et al (Citation2006) point out that ‘unintended consequences are not uniformly errors or mistakes: they are simply surprises that can span a spectrum from lucky to unfortunate’ (p. 548).

In health informatics, many studies recognise the importance of the unintended consequences of adopting healthcare information technologies (HIT). For instance, both Ash et al (Citation2004) and Campbell et al (Citation2006) highlight the importance of the unexpected adverse consequences surrounding the implementation and maintenance of computerised provider order entry systems. Similarly, Harrison et al (Citation2007) analyse the emergent and recursive processes in HIT implementation and their unintended consequences. Collectively, these studies lead to a general consensus that introducing and implementing HIT is likely to incur a range of unanticipated adverse consequences, including higher clinician workloads, interrupted workflow, untoward changes in communications and practices, negative emotional responses, error generation and unexpected changes in power structures (Campbell et al, Citation2006).

Given the recognised importance of unintended adverse consequences in both IS and health informatics, it is somewhat surprising to find limited studies on the unintended adverse consequences of privacy safeguards. One of the few exceptions is the study by Choi et al (Citation2006), which reports that before the enactment of HIPAA, secure-lock doors remained open and passwords were shared to increase ease and efficiency. After HIPAA, work practices changed to locking doors and limiting computer access to avoid regulatory non-compliance and/or penalties. Another example of how implementing privacy safeguards triggers workflow disruptions is documented by Coiera and Clarke (Citation2004), who show that managing patients’ e-consent privacy preferences may impede clinicians’ workflows. Failure to address these workflow disruptions could lead employees to embrace workarounds that bypass the features that make accomplishing their work difficult (Ash et al, Citation2004).

In summary, the privacy literature identifies various types of information privacy safeguards, that is, the mechanisms by which organisations respond to privacy threats and achieve compliance. Although unintended consequences are highlighted as an important issue within privacy management in the healthcare domain, little empirical research explicitly examines them in relation to privacy safeguards, or the struggles that organisations may face in balancing intended and unintended consequences. This research is designed to fill this gap in the literature by exploring and describing how privacy safeguards result in both intended and unintended consequences and by examining their effects on organisational privacy compliance.

Research method

We adopt a qualitative research method in this study to answer our research question on the outcomes of enacting privacy safeguards. Specifically, we use a qualitative research approach based on grounded theory (Strauss & Corbin, Citation1998; Corbin & Strauss, Citation2008). Grounded theory aims to develop theory inductively from data through incremental and systematic progression in knowledge, deriving concepts and hypotheses from the data themselves (Urquhart et al, Citation2010). Furthermore, the grounded theory method is particularly appropriate for studies of dynamic environments (Glaser & Strauss, Citation1967), such as healthcare. It offers a rigorous approach that assists in understanding organisational privacy safeguards at the organisation level through testable theories tightly connected to data and context (Eisenhardt, Citation1989).

Data collection

After clearance by the institutional review board, informants were contacted to participate in this study. Informants from 21 U.S. hospitals as well as other healthcare organisations (consulting and healthcare research firms, the government and professional healthcare organisations) agreed to participate in this study, which is part of a larger project. Data were gathered through semi-structured interviews with 30 consenting informants who had expert knowledge of privacy practices and held key positions in hospitals, such as chief executive officer, chief information officer, chief privacy officer, chief medical information officer, information technology director and privacy officer. Appendix A summarises the informants’ titles and types of organisations. Among hospitals, the study distinguishes between small, medium and large hospitals based on the number of beds. The American Hospital Association classifies hospitals with 100 or fewer beds as small (AHA, Citation2016), those with 500 or more beds as large (Kaunitz et al, Citation1984) and those with between 101 and 499 beds as medium. Interviews lasted between 40 and 90 min and were carried out by the first author between Fall 2010 and Spring 2012. The interviews explored the types of safeguards being used by healthcare organisations to mitigate privacy threats and their impacts on healthcare activities and practices. Our interview questionnaire evolved during the process of conducting this research. When we started identifying themes, we developed probe questions to explain differences in the informants’ privacy conceptualisations. These probe questions became more specific as we advanced in our data analysis. Details about the interview items are provided in Appendixes B and C.

In grounded theory, sampling is driven by conceptual emergence and limited by theoretical saturation (Glaser & Strauss, Citation1967). Consequently, the selection of data sources is neither random nor entirely a priori. For example, we decided a priori that a combination of different hospital sizes was most appropriate for this study; however, pursuing a particular hospital size for further analysis depended on the emergent themes. Strauss and Corbin (Citation1998) note that researchers must be flexible enough to handle what arises during data collection and analysis. In this study, theoretical sampling is evident in the following ways:

  • Interviewing was initiated with informants from hospitals. However, after initial data analysis, this target was revisited to include other healthcare organisations and entities (e.g., the U.S. Department of Health and Human Services, healthcare professional associations, healthcare IT providers and healthcare privacy consultants).

  • The interview questions were also revisited after the analysis of the first interviews, to include more specific questions about the operational impacts of implementing privacy safeguards. This is consistent with Strauss and Corbin’s approach to theoretical sampling, where the researcher ‘adjusts the interviews and observations on the basis of emergent and relevant concepts’ (Citation1998, p. 207).

Although we had difficulty obtaining informants because of the sensitivity of privacy and security topics (Crossler et al, Citation2013), as well as the scheduling challenges of healthcare executives, we managed to secure interviews with 30 informants. Our success was due to our involvement and membership in the Healthcare Information and Management Systems Society (HIMSS) and the Penn State Center for Integrated Healthcare Delivery Systems, and finally to using the snowball technique (Lincoln & Guba, Citation1985), whereby we identified the next informant based on the last informant’s connections. We conducted data collection and analysis until the point of saturation, when we reached redundancy in the data and found no new concepts (Corbin & Strauss, Citation2008).

Data analysis

In this section, we provide an overview of the steps undertaken using the grounded theory approach, as depicted in Figure 1.

Figure 1 Grounded theory analysis process.


All interviews were audio recorded and transcribed verbatim. The transcribed interviews were imported into the qualitative data analysis software NVivo (v.9). Notably, NVivo does not automatically code transcribed interviews but is used to organise the different codes and categories identified during the first- and second-order analyses. NVivo supports different stages of analysis, including defining concepts within themes, called nodes, and providing data analysis capabilities for searching, grouping and relating these nodes. We coded the interviews in several steps. First, we used open coding techniques to inductively identify preliminary categories, with no a priori coding or categorisation. Next, we used axial coding, which helped to develop the categories into themes (Corbin & Strauss, Citation2008). Finally, we implemented selective coding, in which we integrated the categories into a coherent theoretical framework. During the process of data collection and analysis, we reviewed the literature from both the IS and health informatics communities to identify the potential contributions of our findings to the privacy literature in the healthcare domain.

During the coding process, every piece of data was contrasted against other pieces through constant comparison to ensure that the appropriate codes were applied to informants’ views. Having embraced the constant comparative method, we continued looking for information until the categories were saturated and no additional data were found. The constant comparative method has been a key concept in the development and understanding of grounded theory (Glaser & Strauss, Citation1967). According to Glaser and Strauss (Citation1967, pp. 113–114), the constant comparative method facilitates the generation of complex “theories of process, sequence, and change pertaining to organisations, positions, and social interaction [that] correspond closely to the data since the constant comparison force the analyst to consider much diversity in the data”. Constant comparison thus determines what is relevant to the emergent theory and tests the assumptions that the emergent theory makes.

Evaluative and trustworthiness criteria

The evaluation of any research poses the question of the appropriate criteria to be used for making judgments. Positivist researchers employ the criteria of internal validity, external validity, reliability and objectivity. These criteria are not appropriate for interpretive studies. We therefore used two approaches for judging interpretive research: (1) ensuring trustworthiness (Lincoln & Guba, Citation1985) and (2) ensuring the adequacy of the research process and its empirical grounding (Strauss & Corbin, Citation1998; Corbin & Strauss, Citation2008). These approaches are explained next and applied to the theoretical framework (see Appendixes F and G).

Ensuring trustworthiness

The aim of trustworthiness is to support the assumption that a study’s findings are ‘worth paying attention to’ (Lincoln & Guba, Citation1985, p. 290). Lincoln & Guba (Citation1985) offer a set of four trustworthiness criteria appropriate for interpretive research that are analogous to those of positivist research: credibility, transferability, dependability and confirmability. To address credibility, we used multiple methods and sources to ensure triangulation of the findings, such as single interviews, group interviews and data collection from different sources (e.g., hospitals and government administrators, consultants and IT designers). Triangulation was also achieved by using supplemental information gathered during privacy and/or healthcare workshops and roundtables attended by the authors, along with the associated documentation. There were a total of four events. The first was a privacy and security workshop in Florida whose attendees included healthcare consultants, faculty and healthcare government representatives. The second also took place in Florida, in the form of a privacy roundtable; the attendees included chief privacy officers, chief information officers, lawyers and consultants. The third was a healthcare workshop in Pennsylvania with faculty, EHR developers and hospital administrators. The fourth was a healthcare workshop in the District of Columbia whose attendees were faculty and healthcare government representatives. Moreover, the first author has several years of industry experience in healthcare IT, in addition to being an active member of a healthcare research centre (Penn State CHIDS) and of the national HIMSS. To ensure transferability, we provide a detailed first-order analysis of the phenomenon and context, which should provide enough background for readers to judge the plausibility of the findings and their applicability beyond the scope of this project (Van Maanen & Schein, Citation1979). We conducted an inquiry audit rather than an inter-rater reliability assessment, because interpretive research assumes that each researcher will have a unique interpretation of the findings (Lincoln & Guba, Citation1985). An inquiry audit was performed by one professor of organisational behaviour and one senior graduate student (trained in qualitative research) to examine and assess the process of inquiry and review the interview transcripts, coding sheets and data analysis. Finally, to assess how well the findings were supported by the data collected, we shared the study with three professors, two graduate students and two healthcare professionals to solicit critical feedback. The consensus suggests that the research analysis and theoretical model accurately reflect the data.

Ensuring the adequacy of the research process and the empirical grounding

Corbin and Strauss (Citation2008) identify several criteria for evaluating interpretive research. Empirical grounding comprises eight criteria, including whether the concepts used in the research are grounded in the data, whether the concepts are systematically linked, whether the findings offer insight and whether the theoretical framework is able to withstand future testing and research; each criterion is evaluated and documented in Appendix F. The adequacy of the research process comprises seven criteria pertaining to how the original sample was selected, which categories emerged, how theoretical formulations guided some of the data collection and how and why the core category was selected. All seven criteria and their evaluation are documented in detail in Appendix G.

Findings

Using a grounded theory approach, we present the concepts and categories that emerged from the iterative process of data gathering and analysis (Figure 2). The findings are presented by interweaving the first-order codes with the second-order themes to provide an overarching structure through a thick description of the data (Kreiner et al, Citation2006).

Figure 2 Emergent concepts, themes and dimensions.


Intended desirable consequences

We found that the intended desirable consequences include controlled access, the deterrence effect and tracking mechanisms. These categories and representative data for each category are presented in Appendix E.

Controlled access

The creation of technical privacy safeguards, such as role-based access control (RBAC) mechanisms, allowed for better control over who could access the system. Hence, filtering out users who have no business looking at patient information was a positive consequence of privacy safeguards—each employee is granted access based on their need to know, which is defined by their role within the organisation. As one healthcare executive commented:

We have a role-based access, and that’s very important, because you don’t want to give employees any more access than what they need… So, basically, what we do is we look at the information system and the duties of the employee and we base their access on that.

Consequently, hospitals that enact privacy safeguards such as access control within their systems gain better oversight of who is accessing the data and for what purpose.
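
As an illustration of the need-to-know principle informants described, the sketch below shows a minimal role-based access check in Python. The roles and permissions are hypothetical examples, not the actual configuration of any hospital in the study.

```python
# Role-to-permission mapping: each role sees only what its duties require.
ROLE_PERMISSIONS = {
    "physician": {"read_chart", "write_orders", "read_labs"},
    "nurse": {"read_chart", "administer_meds"},
    "billing_clerk": {"read_billing"},
}

def is_authorised(role: str, action: str) -> bool:
    """Grant access only if the action falls within the role's need to know."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_authorised("nurse", "administer_meds")
assert not is_authorised("billing_clerk", "read_chart")  # no business need, filtered out
```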

Deterrence effect

Informants emphasised the ability of a deterrence approach to create an environment of fear when rules are not followed. This fear was perceived as a positive impact, because it sets an example and deters other employees from inappropriately handling patient information. A chief privacy officer of a large hospital used an analogy to illustrate how his organisation benefits from a deterrence approach:

It is sort of a grisly analogy. Back in mediaeval England when they chopped people’s heads off, they would put [that] head on a pike, and they stick it on the London Bridge, and the idea was that it would allow you to see who had their head chopped off. It was a very public hanging. And so, it’s the same thing here, we can’t necessarily say who we fire, but you hope the word gets out, you hope the employee that gets fired almost says, I can’t believe they fired me for looking at that. Well okay fine, I want you to tell your co-workers, because I want your co-workers to say, I am not going to do this again because I don’t want to have the same thing happen to me, or I don’t want to be suspended.

Instilling fear of punishment for failing to protect organisational data is perceived as an important and desirable consequence of enacting privacy safeguards, as it makes a clear statement of the expectations and consequences, which prevents misjudgements. A similar positive influence of fear was recently shown empirically in a study on motivating users to protect organisational security assets (Boss et al, Citation2015).

Tracking mechanisms

In addition to controlled access, healthcare executives believed that the ability to track who accesses specific records and when they were accessed was a major positive outcome of privacy safeguard enactment. One chief medical information officer stated:

One of the nice things about EHRs [electronic health records] is when somebody signs in, you know who signed in, at what time and where they are at [a record]. . . We have tracking mechanisms to be able to determine if I log into a chart and I go look at a nurse I work with. Well, the system really knows who I am looking at here. So, if I am taking care of her [the nurse] as a patient in the emergency department, that would be clinically appropriate. If I have never seen her in a physician–patient relationship and I am looking at her chart, well, that is completely inappropriate.

This ability to perform audits and investigate who accessed the system is viewed as an intended and desirable consequence, deterring access that is not authorised.
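
A minimal sketch of such a tracking mechanism appears below: each chart access is logged, and accesses with no documented treatment relationship can be flagged for audit. The data structures are hypothetical simplifications of what a real EHR audit trail would store.

```python
import datetime

audit_log = []  # in a real EHR this would be an append-only audit store

def log_access(user_id: str, patient_id: str) -> None:
    """Record who opened which chart and when."""
    audit_log.append({
        "user": user_id,
        "patient": patient_id,
        "time": datetime.datetime.utcnow().isoformat(),
    })

def flag_suspect_access(treatment_pairs: set) -> list:
    """Return accesses with no documented clinician-patient relationship."""
    return [entry for entry in audit_log
            if (entry["user"], entry["patient"]) not in treatment_pairs]

log_access("dr_smith", "pt_001")
log_access("dr_smith", "pt_999")   # no treatment relationship on record
print(flag_suspect_access({("dr_smith", "pt_001")}))  # flags the second access
```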

Expected adverse consequences

Healthcare executives also expected some negative outcomes of privacy efforts. These included user resistance and concerns over data disclosures and secondary use. Appendix E presents these categories and representative data for each category.

User resistance

The introduction of new privacy safeguards generated some resistance and complaints from users. Integrating privacy and security mechanisms in healthcare systems, which restrict clinicians’ access to patient information, is challenging. As stated by one chief privacy officer:

The dilemma is to give the clinicians the most important information that they can with the least amount of resistance.

This resistance emerged in instances in which passwords were too complex and/or needed to be changed frequently, making it hard for users to remember them and adhere to the requirements. This challenge was described by one chief information security officer:

If the security is too hard, people wouldn’t do it. If it is beyond their work flow, they won’t do it…I tell people all the time that security flies in the face of convenience; that’s just the way it always is… so, a lot of push-back or complaints.

Data disclosures and secondary use

Informants emphasised their concern over patient information when it is not used for direct treatment. This concern was perceived as a negative impact because the hospital no longer has direct control over the data once they are shared with other entities for purposes of payment, coding, medical research or other uses. One healthcare executive commented:

How do you secure information that’s being used for secondary purposes for research, for quality, for billing and [for] coding?

Another healthcare executive stated:

There are many times when a clerical person will hit the wrong number on the fax machine and fax a lab result for a patient to a Sheetz store [convenience store]… this happens all the time.

Unintended desirable consequences

Healthcare executives reported several positive outcomes of privacy efforts that were not initially intended. These privacy efforts yielded benefits such as standardisation of work processes and better accountability. Appendix E presents these categories and the representative data for each category.

Standardisation of work processes

The creation of privacy policies prompted organisations to become more aware of their physical operational surroundings and the visual setup of computers. For instance, in certain departments where patients are present, extra initiatives were taken to protect patient information, such as monitor privacy screens that make displays impossible to read at angles other than that of the user. One informant stated:

We must make sure that it is [patient information] handled appropriately … provide basic stuff of making sure people close curtains, close doors, they position their computers and equipment so that people can’t read or see the information.

Another desirable and unintended consequence regarding work processes was related to the use of online social media. Employees sometimes post about patients or about the hospital without thinking about potential privacy breaches. These situations are motivating hospitals to put procedures in place to standardise online posts. One privacy officer explained:

Here’s how we get in trouble on Facebook and how a lot of guys get in trouble. I think a lot of the marketing teams like using Facebook for marketing. So, people get in there and say some good things without asking, but then you could also say bad things about a manager or certain things going on. That’s when our privacy comes in … you want to say something bad about [our hospital], what do you do then? There has to be some procedures and protocols around that.

Accountability

Healthcare executives also reported individual and organisational behaviours that showed better accountability in protecting patient information. Among the behaviours described were following best practices for patient data even where no law or regulation requires it and complying voluntarily. Accountability has long been a powerful deterrent against undesired behaviours and refers to ‘the implicit or explicit pressure to justify one’s beliefs and actions to others’ (Lerner & Tetlock, Citation1999; Tadmor & Tetlock, Citation2009, p. 8). Recent IS policy compliance research shows the powerful, positive organisational effects of fostering accountability in employees through accountability theory (Vance et al, Citation2013, Citation2015). Vance et al (Citation2015) explain how accountability works to increase organisational policy compliance, as follows:

Accountability theory explains how the perceived need to justify one’s behaviours to another party causes one to consider and feel accountable for the process by which decisions and judgments have been reached. In turn, this perceived need to account for a decision-making process and outcome increases the likelihood that one will think deeply and systematically about one’s procedural behaviours (p. 347).

Focusing on doing the right thing through best practices makes the environment much more accountable for protecting patient information than does focusing only on meeting the regulatory requirements. In this regard, one chief privacy officer stated:

I do think that there are a number of motivating factors. One of them is the right thing to do and that is the right attitude, and that would get you a lot further than going off checking off check boxes and say we did this, we did this, we are done, because you are never really done, even if you followed all the rules. You have those who do a lot of compliance very voluntarily and that is why I am only half joking when I said that those laws are meant to address those who are not willing to bring themselves into high standards and compliance; that’s why we have them. Think about it, most people who work in a bank don’t embezzle money, but you have rules against that because you have few bad actors, and that is the same here.

You just try to set that base line high enough to make sure that the information is secure. So, that is the purpose of the regulation and some of the certification requirements is to make sure that you have a fairly high baseline to make sure that the information is adequately protected, but we would encourage people to go beyond that and I don’t know how you dictate attitude. And if you are not going to do it, I think the party line is if you are not going to do it because you have the right attitude, then we will make you do it through regulation and enforcement.

Several healthcare executives noted their voluntary compliance beyond compliance with regulatory mandates:

It is not like we did not care about information before [HIPAA] and now all of the sudden the legislation makes you compliant with this. That is a poor assumption! We clearly value patient health information security at the highest level and have had for years …. For example, when we were building the EHRs eight years ago, before HITECH and even before HIPAA, all those considerations of who need to see what information, where is it secured, and where the displays for the screens are, were all inherent to what we are doing. Also, the policies and procedure associated with that. We actually have people who are hired to oversee those things in the institutions—patient privacy officers—so there are people who live and breathe this every day.

When the HIPAA privacy rule came out, because it was a mandate, organisations did comply because they knew they needed to. However, I don’t think that’s the only thing, because most organisations that I’ve come in contact with either, because I’ve worked there or I know their privacy officers, they do compliance because it’s a good thing to do.

We like to see particularly security as being an essential element of an electronic system, and other people see it and add on. That is not the right perspective to have. When you have that perspective, then it flows to the bottom of piles of the things that you are doing. So, yes, now we have this wonderful electronic system, but we don’t have the money to put the adequate security on it. While in reality, you have to. It is part of the price of doing business.

Unintended adverse consequences

Although it is natural to infer that creating technical and human privacy safeguards leads to better organisational compliance and fewer privacy breaches, our analysis shows some unexpected results.

Several executives reported unintended and undesirable impacts of privacy safeguard enactment. These include information unavailability, workflow disruptions, usability issues and operational feasibility issues. Appendix D summarises the categories of unintended consequences and the representative data for each category.

Information unavailability

Notably, a question about the influence of privacy safeguards on the availability of information was not explicitly asked during the interviews. Rather, the informants themselves introduced this challenge while explaining the impacts of privacy safeguard enactment. One chief information security officer described the challenge as pulling in two directions:

We don’t want to keep information out of the hands of people who need it. So, if we develop something that is too stringent … they can’t do their jobs the right way.

Several healthcare leaders discussed the ways in which implementing privacy safeguards influenced the availability or accessibility of patient information. Lacking access to the information needed to perform his or her job is a big hurdle for any healthcare professional. For example, doctors need to see a patient’s medications list or their lab tests but may not need to see a progress note on a psychiatric condition or a psychotherapy note. The desire to balance the implementation of privacy programs and healthcare delivery appeared to have created serious issues for clinicians trying to provide care for their patients, which opened up doors for potential unauthorised access and impacted privacy compliance. As noted by one of the healthcare executives:

The biggest challenge with respect to privacy and healthcare, in my mind, is this notion that you have to err on the side of providing additional information access. You can’t afford to put a barrier in front of a physician or a clinician when they need to have access to the information. So, you have to sometimes err on providing broader access than you might think you need, because you don’t necessarily know what you need about those people who need to have access to. That does raise challenges, because that then allows those individuals [to access] information that they don’t need to see.

Healthcare professionals, such as doctors and nurses, are increasingly dependent on the availability and accuracy of patient information to provide adequate treatment and make other healthcare-related decisions. Information availability is very important in healthcare, where patient information is often needed on a continual basis. Our results highlight the dilemma of ensuring availability of and access to patient information for authorised healthcare providers without breaching the confidentiality of medical information. If the information healthcare professionals need to reach critical clinical decisions is unavailable due to tight access controls, patients may be treated incorrectly. Therefore, the unavailability of information may have dire consequences for the quality of patient care.

Workflow disruptions

As part of the interview protocol (Appendixes B and C), the first author explored the effects of privacy and security safeguards on healthcare workflows. Comments about workflow disruption issues came up during the semi-structured interviews. The data analysis shows that these workflow disruptions were reflected through conflicts and push-back from employees, as noted by one informant:

Do you want me not to administer that medication because everything didn’t line up in the security behind the scenes?

The implementation of certain privacy technologies resulted in conflicts and push-back. For example, timeout features are supposed to log off employees whose sessions are inactive to prevent unauthorised access by other employees. Although, in principle, this feature is an effective privacy initiative, it is not always positively received by certain healthcare professionals, especially doctors in emergency departments. One privacy leader stated:

Once I log in, I don’t want the system to log me out automatically. I don’t like it and timeout features. There’s timeout in all our systems. This is something we have to work around.

Healthcare organisations tend to consider these impacts to avoid push-back and workarounds:

We try to take that into account, the workflow issues, when you are looking at a policy because there is no sense in establishing a policy that people will not adhere to.

Although mitigation tools were implemented to bring the hospitals into compliance, in some cases they ended up negatively impacting the hospitals’ adherence to regulations. Where workflow disruptions occurred, employees found ways around these mitigation tools to accomplish their duties. This disruption is illustrated by the nurses’ workflow that one of the study informants described:

Forty percent of the work that a nurse does is to administer medication; 40% of her day, she’s looking for pills and administering them. . . she is logging in and waiting, waiting, waiting and waiting. That is a problem; she is not going to get her job done. It’s hard enough to do the charting, administering medicine without the waiting, waiting and waiting. So, what most hospitals do is they have these computers-on-wheels, and they wheel [such a computer] into the patient’s room, and they leave it logged on, and they administer the medicine, and they wheel it out, and they leave it logged on, and then they go into the next room and then they leave it logged on. But when they go back to the medicine room, it’s logged on, and that’s a security risk.

Our results suggest that, in the pursuit of privacy compliance, organisations implement processes that may change their employees’ operational workflows. These changes may involve encrypting network transmissions, pulling staff out for training or instituting timeout features. As a result, users may not always react positively to the implemented changes, especially when these changes disrupt their work routines.
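
As one illustration, the inactivity timeout that informants pushed back on can be sketched as below. The five-minute threshold is a hypothetical policy value; shortening it trades security against exactly the workflow disruption described above.

```python
import time

TIMEOUT_SECONDS = 300  # hypothetical policy: log out after 5 minutes idle

class Session:
    def __init__(self, user: str):
        self.user = user
        self.last_activity = time.monotonic()

    def touch(self) -> None:
        """Call on every user action to reset the idle clock."""
        self.last_activity = time.monotonic()

    def is_expired(self) -> bool:
        """True once the idle period exceeds the policy threshold."""
        return time.monotonic() - self.last_activity > TIMEOUT_SECONDS

session = Session("nurse_jones")
if session.is_expired():
    print(f"Logging {session.user} out after inactivity")
```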

Usability issues

Usability has long been defined as the degree of efficiency, ease and effectiveness of use, and this concept has been applied to a broad range of users, tasks, tools and environments (Kushniruk, Citation2002; Kushniruk & Patel, Citation2004; Lowry et al, Citation2006, Citation2013). Usability is particularly crucial to medical systems (Kushniruk, Citation2002; Kushniruk & Patel, Citation2004). With the design and implementation of privacy protective technologies, usability has become an extremely important, albeit poorly understood, element of privacy. The end results of poor usability are user dissatisfaction and unusable systems (Johnson et al, Citation2005). In the healthcare industry, understanding the interplay between usability and privacy is essential because various privacy-enhancing technologies have been introduced to control access to medical facilities and protect the confidentiality of patient information.

The usability challenges we found during data analysis involve current EHR applications and systems and arose from the inherent difficulties associated with using certain applications. Over the course of this study, healthcare executives explained that they had to take into consideration the usability of the privacy safeguards they had implemented:

It comes from EHRs’ usability and access to information. I mean, in certain scenarios, I would like to walk in with a purely clinician’s hat on. I like to walk into a room and see the patient’s information, talk with that patient and provide the care. But, somehow, I have to be acknowledged as being allowed to see that information. So, that is one of the conflicts. I have to log in or else I have to use an RFID tag or swipe something to get into that record.

If a new privacy or security feature is hard to use or difficult to navigate, users will abandon it, as was clearly stated by an informant:

If it is not usable to them, they won’t use it. And the things that are very usable to them, they are used to them, they can; I’ve seen this all the time.

Therefore, failing to address usability issues causes employees either to not use privacy protocols or to find ways around them to accomplish their tasks, which could negatively impact organisational privacy compliance.

Operational feasibility issues

Many of the informants commented on the operational feasibility of the privacy safeguards implemented in their hospitals, including resources, time and efficiency. As stated by one of the informants:

So, it does have an impact on resources and operation. You’re going to get to a point where people are going to have to have staff in place to just deal with that one situation, just to keep up with what they’re going to have to do to make sure they protect themselves. It’s got to be costing us money or it’s got to be costing us efficiency.

For example, implementing automated analytics that trigger alerts whenever a doctor accesses a patient’s record bearing the same last name as the doctor’s can involve so many people and processes that it could negatively influence the overall performance of healthcare delivery. Regarding healthcare regulations, hospitals face major operational issues due to how healthcare policies are crafted. The challenges that healthcare leaders face regarding operational feasibility are weighed against the patients’ best interests and, therefore, impact privacy compliance. One privacy compliance officer stated:

We have lots of policies but we can’t meet the regulations in the strictest letter of the law and offer clinicians their ability to practice in an efficient cost effective manner.

In summary, operational feasibility is an important factor for the deployment of new privacy safeguards in medical practices. Implementing privacy safeguards includes putting into place formal privacy education and training programs, as well as monitoring compliance through the use of technology and human processes. Our data showed unintended adverse consequences of the enactment of privacy safeguards on operational feasibility, resulting in performance degradation.
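
To illustrate the operational trade-off, the same-last-name alert mentioned above reduces to a simple rule, sketched below; every match, however, consumes reviewer time, which is precisely the feasibility cost informants described. The names and matching rule are hypothetical simplifications.

```python
def surname(full_name: str) -> str:
    """Naive last-name extraction; real systems match on structured name fields."""
    return full_name.split()[-1].lower()

def access_alert(clinician_name: str, patient_name: str) -> bool:
    """Flag chart access when clinician and patient share a last name,
    a common trigger for manual privacy review."""
    return surname(clinician_name) == surname(patient_name)

if access_alert("Dr. Maria Alvarez", "John Alvarez"):
    print("Alert: route this access to the privacy office for review")
```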

The imbalance challenge

Throughout this research project, healthcare leaders stated on numerous occasions that privacy threats do not end with the implementation of controls and safeguards. They also reported that both intended and unintended consequences may occur during the enactment of privacy safeguards. Although intended consequences, both desirable and undesirable, are expected and accounted for, it is the emergence of unintended consequences, especially adverse ones, that causes an imbalance between maintaining patient privacy and not inhibiting workflows. The imbalance challenge is an analytical construct that was created to make sense of what organisations reported as the results of enacting information privacy safeguards. The imbalance challenge, which results when the unintended consequences of privacy safeguard enactment outweigh the intended ones, captures organisations’ struggle to maintain patient privacy without inhibiting business processes. Although no mathematical formula was presented or derived, the privacy leaders pointed specifically to adverse unintended consequences as the main culprit for this imbalance.

According to privacy leaders, an imbalance challenge occurs when the unintended consequences of enacting privacy safeguards outweigh the intended ones. A chief information officer stated:

Cost and safety pretty much drive a good portion of what we do and I am not talking about the cost of [technology] …. I am talking about the cost of not getting the results to the patient fast enough. I am talking about the cost of delivering the wrong medication dose … the technology piece of it is expensive, but it is not nearly as expensive as the down side of not doing it.

The same chief information officer continued:

My dilemma is stability versus intricacy like meeting the requirements. The requirements are such that I can’t always offer a stable system, but if I am a patient and I just had hip surgery and I am in excruciating pain and I need pain meds, they [patients] really don’t care that my seven application solution works or not, they just want the pain medication. So, there is the dilemma, stability offering enough flexibility and don’t let yourself get caught but giving out too much information.

One privacy officer illustrated this imbalance challenge by stating:

One of the challenges with my area is when we try to secure the information but yet our healthcare providers need quick access to it. So there’s always kind of a fine line there.

Privacy leaders uniformly emphasised the need for better understanding and handling of conflicting challenges. Hence, a thorough understanding of these factors and their influence on business practices is fundamental for explaining and addressing the imbalance challenge. The imbalance challenge is of pronounced concern to healthcare privacy leaders as illustrated by a chief privacy officer:

The federal law is toying with the idea of making sure that data at rest is encrypted. Not the movement of the data. In other words, if one of my hard drives would be encrypted and if somebody needs to get data unencrypted and pass it forward, that’s going to be almost impossible to put in place. This is because none of the environments, none of the vendors, have built their systems that way.

Discussion of the emergent theoretical framework

This section presents the emergent theoretical framework, the UCPSE framework, illustrating the intended and unintended consequences, the emergence of workarounds and the impacts on the organisation’s privacy compliance (Figure 3). The model describes a process that includes the following categories: the enactment of privacy safeguards; the intended (desirable and adverse) and unintended (desirable and adverse) consequences of implementing privacy safeguards; the imbalance challenge; workarounds; and organisational privacy compliance. Close analysis of the data revealed interrelations among these categories and allowed for their integration into a theoretical framework (Strauss & Corbin, Citation1998). These major categories are found within both the IS and health informatics communities, yet they are seldom interconnected in the literature. We present a theoretical framework that unifies these concepts and thereby contributes to the explanation of the consequences of privacy safeguard enactment, the causes of workarounds and the impacts on organisational privacy compliance. This section also compares the categories and relationships of the theoretical framework with those from related literature (Eisenhardt, Citation1989). The interpretations suggest that the process diverges into two paths when unintended consequences outweigh intended consequences.

Figure 3 The unintended consequences of privacy safeguard enactment (UCPSE) Framework.


Enacting and evaluating privacy safeguards

Organisations enact privacy safeguards to mitigate information privacy threats and ensure legal compliance. However, much of the organisational research on privacy safeguards ignores a comprehensive view of reality: it tends to focus on intended consequences but overlooks unintended adverse ones. Therefore, in this study, we propose an emergent theoretical framework that includes a more realistic range of intended and unintended consequences that can lead to the imbalance challenge. This framework requires that the following four quadrants of potential consequences of safeguards be considered: (1) intended consequences that are desirable, (2) intended consequences that are undesirable, (3) unintended consequences that are desirable and (4) unintended consequences that are undesirable. Figure 4 visually portrays these quadrants with actual examples from our data collection.

Figure 4 Intended and unintended consequences.

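To make the quadrant logic concrete, the sketch below encodes the four quadrants and the imbalance test in Python. It is purely illustrative: the `Consequence` record, the example entries and the count-based comparison are our simplifications for exposition; the framework itself does not prescribe how consequences should be weighted.

```python
from enum import Enum
from typing import NamedTuple, List

class Quadrant(Enum):
    INTENDED_DESIRABLE = 1      # e.g., fewer privacy breaches
    INTENDED_UNDESIRABLE = 2    # e.g., the known cost of safeguards
    UNINTENDED_DESIRABLE = 3    # e.g., unexpected efficiency gains
    UNINTENDED_UNDESIRABLE = 4  # e.g., impeded clinical workflow

class Consequence(NamedTuple):
    description: str
    intended: bool    # anticipated when the safeguard was enacted?
    desirable: bool   # does it advance organisational goals?

def classify(c: Consequence) -> Quadrant:
    """Map a consequence onto the framework's 2x2 quadrants."""
    if c.intended:
        return Quadrant.INTENDED_DESIRABLE if c.desirable else Quadrant.INTENDED_UNDESIRABLE
    return Quadrant.UNINTENDED_DESIRABLE if c.desirable else Quadrant.UNINTENDED_UNDESIRABLE

def imbalance_challenge(consequences: List[Consequence]) -> bool:
    """An imbalance challenge arises when unintended undesirable
    consequences outweigh intended desirable ones. Counting items is a
    stand-in for the severity weighting a real assessment would use."""
    adverse = sum(classify(c) is Quadrant.UNINTENDED_UNDESIRABLE for c in consequences)
    beneficial = sum(classify(c) is Quadrant.INTENDED_DESIRABLE for c in consequences)
    return adverse > beneficial

# Illustrative entries echoing the interview data
observed = [
    Consequence("fewer reported privacy breaches", intended=True, desirable=True),
    Consequence("records unavailable during emergency care", intended=False, desirable=False),
    Consequence("log-in/log-out cycles slow medication rounds", intended=False, desirable=False),
]
print(imbalance_challenge(observed))  # True: adverse outweighs beneficial
```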

Evaluating whether an imbalance challenge exists

Once these consequences are explored, one can better assess whether an imbalance challenge is likely to arise by examining the unintended, and more specifically the undesirable, consequences. Such undesirable consequences tend to shift the organisation's desired balance in a negative direction. For example, we expect the unavailability of information needed to treat a patient to create an imbalance challenge between protecting patient information and treating patients. Similarly, disruptions to workflow, usability and operations impede healthcare delivery. Ideally, the organisation minimises the negative impacts and enhances the positive ones, because healthcare organisations seek both to protect patient information and to achieve regulatory compliance. In doing so, they implement privacy safeguards to minimise privacy breaches and to respond to regulatory pressures.

Our data analysis revealed that the enactment of privacy-protective safeguards influences healthcare workflows through intended and unintended consequences, thus creating the imbalance challenge. Achieving a balance between privacy and utility by reducing unintended negative impacts as far as possible is difficult because of the dynamic environment of healthcare delivery. The dynamics inherent in medical practices, such as scheduled and unscheduled patient visits, clinicians' unscheduled shifts, dynamic collaborations among clinicians and workforce needs at unexpected times and locations, often conflict with role-based access safeguards (Boxwala et al, Citation2011; Chen & Xu, Citation2013) and thus make the imbalance challenge even more pressing. The intended positive consequences of enacting privacy safeguards may act as facilitators of privacy compliance, whereas the unintended adverse consequences may act as inhibitors.

When relating the imbalance challenge to the literature, we first applied the lens of balance theory to seek an explanation for the contradictory intended and unintended impacts. According to Heider (Citation1946) and Lewin (Citation1951), balance theory concerns the structural arrangements between social actors and their affective ties. If these arrangements create an imbalance (tension or strain), actors will act to reduce it; for example, discomfort in a relationship may prompt an actor to detach from it. However, contrary to balance theory's expectation of detachment, our data indicated that once healthcare leaders become aware of unintended impacts, they work on resolving those adverse impacts rather than distancing themselves from them. In situations in which privacy safeguards impede healthcare delivery, healthcare leaders increase their involvement rather than reduce it. For example, when needed information was not available to clinicians, leaders reviewed the policies and created a 'break-the-glass' feature to allow clinicians to bypass the controls. Also, because of the penalties associated with privacy breaches, organisations could not afford to avoid taking action. Hence, balance theory is not the right theoretical lens for fully understanding the imbalance challenge.

Likewise, the opposing negative and positive impacts within the imbalance challenge led us to consider privacy calculus theory from the privacy literature (e.g., Culnan & Bies, Citation2003). This theory has been applied at the individual level (e.g., Dinev & Hart, Citation2006; Xu et al, Citation2009; Keith et al, Citation2013) and provides insights worth considering at the organisation level. Privacy calculus weighs two sets of opposing factors, inhibitors and facilitators, against behavioural intentions such as the willingness to conduct online transactions or to disclose information. An individual's decision to transact online rests on the outcome of weighing both sets of factors: if the effects of the facilitating factors (e.g., perceived benefits, trust and control) outweigh those of the inhibiting factors (e.g., privacy concerns and perceived risk), the individual is more likely to engage in the transaction. Although it is individuals who make these a priori decisions, the imbalance challenge pertains to the consequences of such decisions at the organisation level. Moreover, privacy calculus lets the individual 'calculate' whether a transaction is beneficial, whereas in our model organisations do not calculate but instead deal with the consequences in the form of the imbalance challenge. To reiterate, the imbalance challenge results from unintended negative impacts outweighing intended positive impacts. Thus, although the imbalance challenge does consider costs and benefits, it shares neither the lens nor the individual-level assumptions of privacy calculus, and privacy calculus is therefore not an accurate theoretical lens for fully understanding it.

Bypassing privacy safeguards: workarounds and reactance

Our findings suggest that the organisational struggle to meet ever-increasing privacy constraints and to comply with regulatory requirements has become a central concern for healthcare leaders. Again, the imbalance challenge emerged as the key concept in these struggles. An unattended imbalance challenge can potentially harm the organisation's privacy compliance because, in many cases, employees use the imbalance to justify workarounds and negative reactance to privacy safeguards. These workarounds range from legitimate through less legitimate to illegitimate user reactions. A striking example of a legitimate workaround was provided by a chief privacy officer who stated that clinicians sometimes bypass privacy safeguards to do their jobs, which involve saving lives. He emphasised that he would rather explain to the Office for Civil Rights why one of the hospital's employees inappropriately accessed information (e.g., used someone else's log-in credentials) than explain to a family member that a loved one could not be saved because of privacy safeguards. The concern surrounding such actions is that the same access that saves lives can also undermine privacy compliance. An illegitimate example of a workaround was provided by the same chief privacy officer when he referred to the case of an Arizona congresswoman who was admitted to a hospital after being shot and how several employees lost their jobs for inappropriately looking up her medical records. Finally, less legitimate workarounds included bypassing encryption or leaving a computer unattended while logged on to save time.

This study provides evidence, with support from the literature, that when unintended adverse impacts outweigh intended ones, healthcare professionals may see a need to improvise or work around their prescribed workflows. For example, information unavailability can be circumvented by users borrowing passwords or smart cards to access records they are not authorised to access (France, Citation1998). They may also ignore required encryption mechanisms because of those mechanisms' impact on job performance. The potential harm lies in the subsequent use of patient information (e.g., copying and transmitting) under other users' log-ins. The extant health informatics literature describes workarounds as clever alternative methods developed by users to accomplish what a system does not easily allow them to do (Ash et al, Citation2004). Similarly, Morath and Turnbull (Citation2005) define workarounds as 'work patterns an individual or a group of individuals create to accomplish a crucial work goal within a system of dysfunctional work processes that prohibits the accomplishment of that goal or makes it difficult' (p. 52). Workarounds are recognised in both the IS and health informatics literatures (Pollock, Citation2005); however, few studies seek to explain the concept theoretically (Halbesleben et al, Citation2008), especially with regard to information privacy.

Aside from workarounds, a dangerous unintended consequence organisations may face in a compliance context is negative employee reactance. In the IS policy compliance literature, this has recently been addressed in terms of negative employee reactance to policies, including pushing back, doing the opposite of what is required, workarounds, malicious compliance and anger (Posey et al, Citation2011; Lowry & Moody, Citation2015; Lowry et al, Citation2015). This literature indicates that such unintended consequences arise from employees' perceived threats to their freedom, perceived unfairness or perceived invasions of their privacy. An illustrative example was provided by a chief information officer who stated that nurses within his hospital exhibited reactance to some of the hospital's computer privacy policies by doing the opposite of what was mandated. In fact, 40% of a nurse's work involves administering medication to different patients in a timely manner, and having to log in and log out between patients was getting in the way of delivering care. In reaction, several nurses purposely left their computers logged on while wheeling the machines between rooms or fetching medication for their patients.

Assessing the impacts on organisational privacy compliance

In using grounded theory, Urquhart et al (Citation2010) emphasise a systematic and iterative approach to theory conceptualisation. Embracing this approach enabled us to analyse the unintended adverse consequences further. We pursued theoretical sampling to deepen our knowledge of the intended and unintended consequences of privacy safeguard enactment, their impacts on business practices and their implications for privacy compliance. Further analysis allowed us to distinguish between (1) organisations whose leaders were not aware of the unintended negative impacts and (2) organisations whose leaders were aware of those impacts and accounted for the imbalance challenge in how they responded to privacy threats. When asked how they measured the effectiveness of their safeguards, unaware leaders indicated that no formal metrics were in place to assess the impacts and consequences, intended or unintended, of privacy safeguard enactment. Instead, they relied on the number of complaints or reported privacy breaches as an indication of the effectiveness of their privacy safeguards. Once these leaders became aware of the unintended adverse impacts, they considered revisiting their safeguards to account for the imbalance challenge. In these instances, awareness arose only when privacy compliance became an issue for the organisation. Clearly, this is a dangerous and suboptimal approach, because it relies on reactive assessment of privacy breaches. If the organisations' privacy compliance had not been in jeopardy, would the leaders ever have become aware of any unintended adverse impacts? Such an approach, moreover, waits for catastrophic privacy breaches to inform policy development.

Additional analysis revealed that leaders who were aware of the unintended adverse impacts performed some form of a priori privacy risk assessment instead of waiting for breaches to occur. Such initiatives enabled leaders to watch for these impacts and sometimes to prevent or minimise them. Ultimately, a proactive rather than reactive approach to assessment distinguishes these two types of organisations and further explains the imbalance challenge. Indeed, organisations that take a proactive approach try to develop privacy metrics (see Appendix H) to assess unintended adverse impacts and thereby limit workarounds and increase their overall privacy compliance.

Research contributions and implications

To date, most studies on privacy focus on designing and implementing appropriate safeguards to mitigate information privacy threats. The key limitation of this literature, however, is that it generally assumes only the expected positive consequences and considers neither negative nor unintended ones. Hence, this research lacks a realistic grounding in actual healthcare privacy practice. Accordingly, there have been recent calls for research to investigate the effectiveness and consequences of enacting privacy safeguards; as Belanger and Crossler (Citation2011) point out, 'there are many behavioural questions to be explored with respect to not only use of potential privacy protection tools but also effectiveness and consequences of use' (p. 1022). This void in the extant privacy literature is also identified in an interdisciplinary review by Smith et al (Citation2011), which highlights the need for more privacy research to consider actual outcomes. By using an interpretive approach, our study captured how healthcare executives understand the process by which privacy safeguards are evaluated and subsequently bypassed and the resulting influences on the organisation's privacy compliance. This led to the development of our emergent theoretical framework, the UCPSE framework.

Methodologically, grounded theory provides a rich lens through which to understand the consequences of privacy safeguards and their implications for privacy compliance. We selected grounded theory because no extant theory explains how organisations interpret the implications of privacy safeguards; the healthcare domain, in turn, offers practical relevance and a suitable context in which to study these processes. The study spanned 16 months, during which we interviewed 30 privacy leaders from several healthcare organisations, including government agencies, uncovering subtle organisational dynamics that would not have emerged through quicker data collection techniques such as online surveys. The ability to revisit the interview questions and the target population, adding more pertinent questions and informants, was crucial for reaching data saturation.

This study responds to the need for interdisciplinary research by converging the research streams of IS and health informatics, uncovering the challenges that healthcare organisations face and presenting an in-depth overview of privacy management within the healthcare domain. In doing so, we uncovered and defined the rich construct of the imbalance challenge. In contrast to previous research involving balance theory and privacy calculus, we make the case for the uniqueness of the imbalance challenge and for why a new theoretical framework (the UCPSE) was needed to understand it. Contrary to balance theory's expectation of detachment, our data indicated that once healthcare leaders become aware of unintended impacts, they work to resolve those adverse impacts rather than distance themselves from them. In other words, in situations in which privacy safeguards stood in the way of healthcare delivery, healthcare leaders increased their involvement rather than reduced it.

Likewise, the consumer theory of privacy calculus does not fit well, because the imbalance challenge pertains to the consequences of such decisions at the organisation level. Privacy calculus lets the individual 'calculate' whether it is beneficial to engage in a transaction, whereas in the UCPSE framework organisations do not calculate but deal with the consequences in the form of the imbalance challenge. The imbalance challenge results when unintended negative impacts outweigh intended positive impacts. Hence, although the imbalance challenge does consider costs and benefits, it does so through neither the same lens nor the individual-level assumptions of privacy calculus, making privacy calculus an inadequate theoretical lens for fully understanding it. We thus contribute two key theoretical products that are highly contextualised and unique to the healthcare privacy context: a rich construct (the imbalance challenge) and a framework (the UCPSE framework). Hassan and Lowry (Citation2015) explain that original IS theory cannot emerge without richly contextualised constructs, because the core of a theory lies in its unique constructs. They also explain that grounded theory is an excellent starting place for concepts and constructs to emerge without superficiality, and that frameworks are highly useful in setting out the territory for research, including its key constructs and their relationships.

Finally, none of these insights would be possible without conducting organisational-level data collection on privacy issues, which is rarely done. This research provides new theoretical insights into understanding privacy management by targeting the organisation level of analysis through a grounded theory approach. In the IS field, Smith et al (Citation2011) make an explicit call for research on studying information privacy at the organisation level (p. 1006):

Indeed, most rigorous studies of organisational privacy policies and practices would likely include a set of exhaustive interviews with an organisation’s members and stakeholders, and some amount of deep process tracing would also likely be involved. Such studies are the best approach to uncovering the somewhat subtle organisational dynamics that drive privacy policies and practices.

Thus, while collecting organisational-level data, we carefully evaluated it against Strauss and Corbin's (Citation1998) and Corbin and Strauss' (Citation2008) eight evaluative criteria for empirical grounding (Appendix F) and seven criteria for judging a grounded theory research process (Appendix G).

Implications and suggestions for healthcare professionals

The dynamics between enacting privacy safeguards and ensuring privacy compliance constitute a major concern for healthcare executives. We investigate this relationship by introducing the imbalance challenge, a perspective that encourages healthcare practitioners to evaluate both the intended and unintended consequences of privacy safeguard enactment. The imbalance challenge is the result of unintended consequences outweighing the intended consequences of privacy safeguard enactment. Such an imbalance may lead to workarounds that run counter to the very safeguards in place.

To minimise the imbalance challenge, we first propose that practitioners embrace a proactive approach to privacy safeguard enactment. Such an approach should start by instituting a privacy impact assessment tool (see Appendix H), supplemented with a column that tracks the impact and/or workarounds of each item on business practices, thereby creating an environment of awareness and monitoring. Furthermore, healthcare practitioners should strive to develop a culture of continuous, rather than one-time, privacy risk assessment that examines the cycle of threats, actions and impacts. To facilitate this continuous assessment, we propose establishing triggers that alert administrators to common conditions indicating that workarounds may be occurring. For example, when a nurse or physician is present during a shift but not logged in, an alert should prompt further investigation, as the clinician may be using someone else's log-in credentials. Likewise, if a nurse or physician remains logged in past the end of a shift, that would indicate possible accidental open access to the account, which should also be investigated.
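As a concrete illustration, the following Python sketch flags both trigger conditions from hypothetical shift-roster and session-log records. All names (`Shift`, `Session`, `workaround_alerts`) are ours, and the logic is deliberately simplified; a production system would reconcile multiple shifts per clinician and draw on the organisation's actual identity and EHR logs.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional, Tuple

@dataclass
class Shift:
    clinician_id: str
    start: datetime
    end: datetime

@dataclass
class Session:
    clinician_id: str
    login: datetime
    logout: Optional[datetime]  # None while still logged in

def workaround_alerts(shifts: List[Shift], sessions: List[Session],
                      now: datetime) -> List[Tuple[str, str]]:
    """Raise the two trigger conditions discussed above."""
    alerts = []
    for shift in shifts:
        active = [s for s in sessions
                  if s.clinician_id == shift.clinician_id
                  and s.login <= now and (s.logout is None or s.logout > now)]
        on_shift = shift.start <= now <= shift.end
        if on_shift and not active:
            # Present but never logged in: may be using a colleague's credentials.
            alerts.append((shift.clinician_id,
                           "on shift with no active session - investigate credential sharing"))
        if now > shift.end and active:
            # Session still open after the shift: possible unattended log-in.
            alerts.append((shift.clinician_id,
                           "session active past shift end - investigate open access"))
    return alerts

# Example: at 19:30, a nurse scheduled 07:00-19:00 still has an open session.
now = datetime(2016, 3, 1, 19, 30)
shifts = [Shift("rn_jones", datetime(2016, 3, 1, 7), datetime(2016, 3, 1, 19))]
sessions = [Session("rn_jones", datetime(2016, 3, 1, 7, 5), None)]
print(workaround_alerts(shifts, sessions, now))
# [('rn_jones', 'session active past shift end - investigate open access')]
```

Either alert should only prompt investigation; as the interviews make clear, an open session may reflect a legitimate emergency rather than misuse.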

Second, our data analysis reveals different levels of workarounds, from legitimate through less legitimate to illegitimate. We propose that practitioners embrace this distinction, as workarounds to save lives are very different from workarounds to save time or workarounds that involve malicious data abuse. Hence, privacy safeguards should account for, or be revisited to create, a 'break-the-glass' feature for situations in which lives are at stake. 'Break the glass' is an escape mechanism that allows certain users to escape a domain constraint (Lovis et al, Citation2007) and thereby bypass role-based access control (RBAC) privacy mechanisms: the physician justifies his or her need to access a particular patient's information and is then granted access.
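Lovis et al (Citation2007) describe break-the-glass only at the level of an escape mechanism; the sketch below is one plausible rendering in Python, in which access outside a user's role requires a recorded justification. The role table, function signature and audit format are our assumptions rather than any informant's actual implementation.

```python
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("privacy_audit")

# Hypothetical role-to-permission table; real RBAC policies are far richer.
ROLE_PERMISSIONS = {
    "attending_physician": {"read_chart", "write_orders"},
    "nurse": {"read_chart", "record_vitals"},
    "billing_clerk": {"read_billing"},
}

def access_record(user_id: str, role: str, patient_id: str, action: str,
                  justification: str = "") -> bool:
    """Grant access via ordinary RBAC, or via break-the-glass when a
    justification is supplied; every override is written to an audit trail
    so compliance staff can review it afterwards."""
    if action in ROLE_PERMISSIONS.get(role, set()):
        return True  # ordinary role-based access
    if justification:
        # Break the glass: bypass the role constraint but leave a trail.
        audit_log.warning(
            "BREAK-THE-GLASS user=%s role=%s patient=%s action=%s "
            "justification=%r at=%s",
            user_id, role, patient_id, action, justification,
            datetime.now(timezone.utc).isoformat())
        return True
    return False  # denied: outside role and no justification given

# A clinician outside the provisioned roles gains emergency access, audited:
access_record("dr_adams", "visiting_physician", "patient-042", "read_chart",
              justification="emergency consult; treating team unreachable")
```

The design choice worth noting is that the override is deliberately easy to invoke but impossible to invoke silently, which preserves the life-saving flexibility our informants demanded while keeping compliance staff in the loop.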

Third, practitioners should re-evaluate their privacy policies and processes by involving all stakeholders, including but not limited to physicians, nurses, administrators, insurance companies and business associates. For example, when a privacy policy or process burdens the 40% of a nurse's work spent administering medication, requiring a log-in and log-out at each patient visit, each trip to fetch medication and each return to administer it, several times a day, that nurse will embrace workarounds such as leaving the computer logged in. The more the different stakeholders are involved in the practice of privacy, the lower the likelihood of negative consequences. This is also why privacy policies alone (unless developed with stakeholders and carefully monitored and enforced) can be highly superficial and virtually meaningless to an organisation. What is needed is real privacy advocacy and protection, built into the organisation as a process and led by senior management.

It is important for healthcare executives to understand that an effective way to minimise the imbalance challenge begins with efforts to minimise the unintended consequences. Through the implications discussed above, our study offers a preliminary benchmark by which healthcare executives can better account for unintended consequences and ultimately achieve privacy compliance.

Study limitations and future research opportunities

This study has several limitations that suggest future research opportunities. Regarding the validity of an emergent theory, it is worth referring to generalisability, which is 'the validity of a theory in a setting different from the one where it was empirically tested and confirmed' (Lee & Baskerville, Citation2003, p. 221). Lee and Baskerville (Citation2003) clarify that the appropriate type of generalisability (not only statistical) should be applied to this particular type of theory-development study. The purpose of our study was not to achieve statistical validation but to discover patterns for the purpose of theory building and to gain a better understanding of the main issues in this context. It is reasonable to expect the insights from this emergent framework to guide future research towards a more formal theory (Orlikowski, Citation1993). Hence, additional data collection and testing are needed to further clarify this study's findings and to formalise its framework into theory. Likewise, more empirical and theoretical work is needed to examine more closely the relationships among privacy safeguard enactment, the imbalance challenge, workarounds and privacy compliance. Longitudinal data would be especially useful here, so that a better understanding of the underlying causal mechanisms can emerge.

Another limitation is that the findings are based solely on U.S. hospitals. There could be differences in other countries because of differences in legal systems and basic economic structures. Moreover, some IS researchers demonstrate that information privacy issues differ across countries and cultures (Milberg et al, Citation2000; Bellman et al, Citation2004; Posey et al, Citation2010; Lowry et al, Citation2011). Hence, another research opportunity would be a comparative study of the factors influencing this imbalance, taking country and cultural influences into consideration.

Finally, several of the phenomena we observed can be further studied in terms of lower-order causal mechanisms and IT artefacts. For example, we showed that the presence of accountability is a positive privacy facilitation factor in several organisations, yet no one has studied the lower-order components of accountability in this privacy context. Accountability theory proposes several mechanisms that increase accountability perceptions and subsequent organisational policy compliance (Vance et al, Citation2013, Citation2015): identifiability, expectation of evaluation, awareness of monitoring and social presence. Vance et al empirically validate these mechanisms and tie them to IT artefact design in security compliance settings. Similar design-oriented IT artefact research could be highly useful in the healthcare domain, especially because many medical systems grant broad access so that many different medical practitioners can reach patient information.

Conclusion

The recent literature suggests that most extant information privacy research focuses on the creation of privacy safeguards and neglects the actual consequences of the practices organisations enact. An imbalance challenge occurs when the unintended consequences outweigh the intended consequences of privacy safeguard enactment. To address the imbalance challenge, organisations need to achieve a balance between privacy and utility, meeting privacy requirements without impeding workflows. Using an interpretive grounded theory approach, this study investigates the consequences of privacy safeguards and their effects on meeting privacy requirements without impeding workflow in medical practices.

This research responds to a theoretical challenge overlooked in prior research, and it makes several contributions. First, it introduces a theoretical framework (the UCPSE) that explicates the process by which privacy safeguards, with their intended and unintended consequences, are evaluated and subsequently bypassed, and the impacts on organisational privacy compliance. Second, this study was designed to gain an in-depth understanding of the actual outcomes and implications of privacy safeguards in healthcare organisations; the grounded theory methodology provides a rich lens for understanding those consequences and their implications for privacy compliance. Recognising the state of imbalance in which unintended consequences outweigh intended consequences constitutes a strong conceptual foundation for studying the impacts of enacting privacy safeguards. Third, this research contributes to the recent call for interdisciplinary research by converging the research streams of IS and health informatics and presents an in-depth view of privacy management within the healthcare domain. Fourth, it responds to the lack of organisation-level privacy research and provides new theoretical insights into privacy management by targeting this under-researched level of analysis through a grounded theory approach.

Additional information

Notes on contributors

Rachida Parks


Rachida Parks is an Assistant Professor of Computer Information Systems at Quinnipiac University where she teaches courses in Information Systems and Business Analytics. She earned a Ph.D. from the Pennsylvania State University and has over 10 years of experience in Information Technology within the Department of Defense, NBC Universal Studios, as well as her own consulting firm. Her research interests include Information Privacy and Security, Health Informatics, Business Analytics and Information Technology and Education. Her research has been published in several journals and conferences. She served as an Associate Editor twice for ICIS privacy and security track, program chair for AMCIS, and co-chair for IFIP Dewald Roode Information Security Workshop. She also serves extensively on diversity and minority programs.

Heng Xu

Heng Xu is an associate professor of Information Sciences and Technology at the Pennsylvania State University. She leads the Privacy Assurance Lab at Penn State. Her work has been published in premier outlets across various fields such as MIS, Law, Computer Science and Human Computer Interaction.

Chao-Hsien Chu

Chao-Hsien Chu is a professor of Information Sciences and Technology, and Management Science and Information Systems (Business School), and the founding director of the Center for Information Assurance at Penn State. His Ph.D. in Business Administration was from the Pennsylvania State University. His research interests include information assurance and security, RFID technologies integration, security and deployment, emerging information/intelligent technologies, and supply chain integration. His research was supported by the US National Science Foundation, Department of Defense, National Security Agency, Hewlett-Packard Laboratories, Cisco Systems and others. He has published more than 160 refereed articles in top-ranking journals and in major conference proceedings. Three of his papers received the best paper awards.

Paul Benjamin Lowry

Paul Benjamin Lowry is a Full Professor of Information Systems at the Faculty of Business and Economics, The University of Hong Kong. He received his Ph.D. in MIS from the University of Arizona. He has published over 170 articles with many appearing in outlets such as MISQ, ISR, JMIS, JAIS, IJHCS, JASIST, ISJ, EJIS, CACM, Information Sciences, DSS, IEEETSMC, IEEETPC, SGR, Expert Systems with Applications, JBE, Computers & Security, CAIS and others. He serves as co-editor-in-chief at AIS-Transactions on HCI; SE at Decision Sciences; and AE at Information Systems Journal, European Journal of IS, Information & Management, and Communications of the AIS. He has also served as an ICIS track co-chair. His research interests include behavioural security issues, HCI, e-commerce and scientometrics.

References

  • Aberdeen J Bayer S Yeniterzi R Wellner B Clark C The MITRE Identification Scrubber Toolkit: design, training, and assessment International Journal of Medical Informatics 2010 79 12 849 859 10.1016/j.ijmedinf.2010.09.007
  • AHA (2016) Section for small or rural hospitals.
  • Ash JS Berg M Coiera E Some unintended consequences of information technology in health care: The nature of patient care information system-related errors Journal of the American Medical Informatics Association 2004 11 2 104 112 10.1197/jamia.M1471
  • Belanger F Crossler RE Privacy in the digital age: a review of information privacy research in information systems MIS Quarterly 2011 36 4 1017 1041
  • Bellman S Johnson EJ Kobrin SJ Lohse GL International differences in information privacy concerns: a global survey of consumers The Information Society 2004 20 5 313 324 10.1080/01972240490507956
  • Blanquer I Hernández V Segrelles D Torres E Enhancing privacy and authorization control scalability in the grid through ontologies IEEE Transactions on Information Technology in Biomedicine 2009 13 1 16 24 10.1109/TITB.2008.2003369
  • Blobel B Nordberg R Davis JM Pharow P Modelling privilege management and access control International Journal of Medical Informatics 2006 75 8 597 623 10.1016/j.ijmedinf.2005.08.010
  • Boss SR Galletta DF Lowry PB Moody GD Polak P What do users have to fear? Using fear appeals to engender threats and fear that motivate protective behaviors in users MIS Quarterly 2015 39 4 837 864
  • Boxwala AA Kim J Grillo JM Ohno-Machado L Using statistical and machine learning to help institutions detect suspicious access to electronic health records Journal of the American Medical Informatics Association 2011 18 4 498 505 10.1136/amiajnl-2011-000217
  • Boyd AD Hosner C Hunscher DA Athey BD Clauw DJ An ‘Honest Broker’ mechanism to maintain privacy for patient care and academic medical research International journal of medical informatics 2007 76 5 407 411 10.1016/j.ijmedinf.2006.09.004
  • Brown KL Analyzing the role of the project consultant: cultural change implementation Project Management Journal 2000 31 3 52 55
  • Bulgurcu B Cavusoglu H Benbasat I Information security policy compliance: an empirical study of rationality-based beliefs and information security awareness MIS Quarterly 2010 34 3 523 A527
  • Campbell EM Sittig DF Ash JS Guappone KP Dykstra RH Types of unintended consequences related to computerized provider order entry Journal of the American Medical Informatics Association 2006 13 5 547 556 10.1197/jamia.M2042
  • Canim M Kantarcioglu M Malin B Secure management of biomedical data with cryptographic hardware IEEE Transactions on Information Technology in Biomedicine 2012 16 1 166 175 10.1109/TITB.2011.2171701
  • Chen K Chang YC Wang DW Aspect-oriented design and implementation of adaptable access control for electronic medical records International Journal of Medical Informatics 2010 79 3 181 203 10.1016/j.ijmedinf.2009.12.007
  • Chen Y and Xu H (2013) Privacy management in dynamic groups: understanding information privacy in medical practices. In 16th ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW), pp 541–552, San Antonio, TX.
  • Choe J Yoo SK Web-based secure access from multiple patient repositories International Journal of Medical Informatics 2008 77 4 242 248 10.1016/j.ijmedinf.2007.06.001
  • Choi YB Capitan KE Krause JS Streeper MM Challenges associated with privacy in health care industry: implementation of HIPAA and the security rules Journal of Medical Systems 2006 30 1 57 64 10.1007/s10916-006-7405-0
  • Claerhout B Demoor G Privacy protection for clinical and genomic data: the use of privacy-enhancing techniques in medicine International Journal of Medical Informatics 2005 74 2 257 265 10.1016/j.ijmedinf.2004.03.008
  • Coiera E Clarke R E-consent: the design and implementation of consumer consent mechanisms in an electronic environment Journal of the American Medical Informatics Association 2004 11 2 129 140 10.1197/jamia.M1480
  • Cooper HM (1998) Synthesizing Research: A Guide for Literature Reviews. (Vol. 2). Sage, Thousand Oaks, CA.
  • Corbin JM Strauss AL Basics of Qualitative Research: Grounded Theory Procedures and Techniques 2008 3 Newbury Park Sage
  • Croll PR Determining the privacy policy deficiencies of health ICT applications through semi-formal modelling International Journal of Medical Informatics 2011 80 2 e32 e38 10.1016/j.ijmedinf.2010.10.006
  • Crossler RE Johnston AC Lowry PB Hu Q Warkentin M Future directions for behavioral information security research Computers & Security 2013 32 1 90 101 10.1016/j.cose.2012.09.010
  • Culnan MJ Bies RJ Consumer privacy: balancing economic and justice considerations Journal of Social Issues 2003 59 2 323 343 10.1111/1540-4560.00067
  • Culnan MJ Williams CC How ethics can enhance organizational privacy: lessons from the Choicepoint and TJX data breaches MIS Quarterly 2009 33 4 673 687
  • D’Arcy J Hovav A Galletta D User awareness of security countermeasures and its impact on information systems misuse: a deterrence approach Information Systems Research 2009 20 1 79 98 10.1287/isre.1070.0160
  • Dhillon G Moores TT Internet privacy: interpreting key issues Information Resources Management Journal 2001 14 4 33 37 10.4018/irmj.2001100104
  • Dinev T Hart P An extended privacy calculus model for e-commerce transactions Information Systems Research 2006 17 1 61 80 10.1287/isre.1060.0080
  • Eisenhardt KM Building theories from case study research Academy of Management Review 1989 14 4 532 550
  • Fernando JI Dawson LL The health information system security threat lifecycle: an informatics theory International Journal of Medical Informatics 2009 78 12 815 826 10.1016/j.ijmedinf.2009.08.006
  • France FHR Ethics and biomedical information International Journal of Medical Informatics 1998 49 1 111 115 10.1016/S1386-5056(98)00018-5
  • Glaser BG Strauss AL The Discovery of Grounded Theory: Strategies for Qualitative Research 1967 New York Aldine de Gruyter
  • Greenaway KE Chan YE Theoretical explanations for firms’ information privacy behaviors Journal of the Association for Information Systems 2005 6 5 171 198
  • Gritzalis S Lambrinoudakis C Lekkas D Deftereos S Technical guidelines for enhancing privacy and data protection in modern electronic medical environments IEEE Transactions on Information Technology in Biomedicine 2005 9 3 413 423 10.1109/TITB.2005.847498
  • Haas S Wohlgemuth S Echizen I Sonehara N Müller G Aspects of privacy for electronic health records International Journal of Medical Informatics 2011 80 2 e26 e31 10.1016/j.ijmedinf.2010.10.001
  • Halbesleben JR Wakefield DS Wakefield BJ Work-arounds in health care settings: literature review and research agenda Health Care Management Review 2008 33 1 2 12 10.1097/01.HMR.0000304495.95522.ca
  • Harrison MI Koppel R Bar-Lev S Unintended consequences of information technologies in health care—an interactive sociotechnical analysis Journal of the American Medical Informatics Association 2007 14 5 542 549 10.1197/jamia.M2384
  • Hassan NR and Lowry PB (2015) Seeking middle-range theories in information systems research. In International Conference on Information Systems (ICIS 2015), AIS, Fort Worth, TX.
  • Heider F Attitudes and cognitive organization Journal of Psychology 1946 21 1 107 112 10.1080/00223980.1946.9917275
  • HHS (2016) Breaches affecting 500 or more individuals. March 12, 2016, https://ocrportal.hhs.gov/ocr/breach/breach_report.jsf.
  • Hsu J, Shih S-P, Hung YW and Lowry PB (2015) How extra-role behaviors can improve information security policy effectiveness. Information Systems Research 26(2), 282–300.
  • Hu Q Xu Z Dinev T Ling H Does deterrence work in reducing information security policy abuse by employees? Communications of the ACM 2011 54 6 54 60 10.1145/1953122.1953142
  • Johnson CM Johnson TR Zhang J A user-centered framework for redesigning health care interfaces Journal of Biomedical Informatics 2005 38 1 75 87 10.1016/j.jbi.2004.11.005
  • Kantarcioglu M Jiang W Liu Y Malin B A cryptographic approach to securely share and query genomic sequences IEEE Transactions on Information Technology in Biomedicine 2008 12 5 606 617 10.1109/TITB.2007.908465
  • Kaunitz AM Grimes DA Hughes JM Smith JC Hogue JRC Maternal deaths in the United States by size of hospital Obstetrics & Gynecology 1984 64 3 311 314
  • Keith MJ Thompson SC Hale J Lowry PB Greer C Information disclosure on mobile devices: re-examining privacy calculus with actual user behavior International Journal of Human–Computer Studies 2013 71 12 1163 1173 10.1016/j.ijhcs.2013.08.016
  • Kreiner GE Hollensbe EC Sheep ML Where is the “me” among the “we”? Identity work and the search for optimal balance Academy of Management Journal 2006 49 5 1031 1057 10.5465/AMJ.2006.22798186
  • Kushniruk A Evaluation in the design of health information systems: application of approaches emerging from usability engineering Computers in Biology and Medicine 2002 32 3 141 149 10.1016/S0010-4825(02)00011-2
  • Kushniruk AW Patel VL Cognitive and usability engineering methods for the evaluation of clinical information systems Journal of Biomedical Informatics 2004 37 1 56 76 10.1016/j.jbi.2004.01.003
  • Lal D (2001) Unintended Consequences: The Impact of Factor Endowments, Culture, and Politics on Long-Run Economic Performance. (Vol. 7). MIT Press, Cambridge, MA.
  • LeRouge C De Leo G Information systems and healthcare XXXV: health informatics forums for health information systems scholars Communications of the Association for Information Systems 2010 27 7 99 112
  • Lee AS Baskerville RL Generalizing generalizability in information systems research Information Systems Research 2003 14 3 221 243 10.1287/isre.14.3.221.16560
  • Lee WB Lee CD A cryptographic key management solution for HIPAA privacy/security regulations IEEE Transactions on Information Technology in Biomedicine 2008 12 1 34 41 10.1109/TITB.2007.906101
  • Lerner JS Tetlock PE Accounting for the effects of accountability Psychological Bulletin 1999 125 2 255 275 10.1037/0033-2909.125.2.255
  • Lewin K Field Theory in Social Science: Selected Theoretical Papers 1951 Oxford Harpers
  • Li M Carrell D Aberdeen J Hirschman L Malin BA De-identification of clinical narratives through writing complexity measures International Journal of Medical Informatics 2014 83 10 750 767 10.1016/j.ijmedinf.2014.07.002
  • Lincoln YS and Guba EG (1985) Naturalistic Inquiry. (Vol. 75). Sage, Newbury Park.
  • Lovis C Spahni S Cassoni N Geissbuhler A Comprehensive management of the access to the electronic patient record: towards trans-institutional networks International Journal of Medical Informatics 2007 76 5 466 470 10.1016/j.ijmedinf.2006.09.014
  • Lowry PB Cao J Everard A Privacy concerns versus desire for interpersonal awareness in driving the use of self-disclosure technologies: the case of instant messaging in two cultures Journal of Management Information Systems 2011 27 4 163 200 10.2753/MIS0742-1222270406
  • Lowry PB Gaskin J Twyman NW Hammer B Roberts TL Taking ‘fun and games’ seriously: proposing the hedonic-motivation system adoption model (HMSAM) Journal of the Association for Information Systems 2013 14 11 617 671
  • Lowry PB Moody GD Proposing the control-reactance compliance model (CRCM) to explain opposing motivations to comply with organizational information security policies Information Systems Journal 2015 25 5 433 463 10.1111/isj.12043
  • Lowry PB Posey C Bennett RJ Roberts TL Leveraging fairness and reactance theories to deter reactive computer abuse following enhanced organisational information security policies: an empirical study of the influence of counterfactual reasoning and organisational trust Information Systems Journal 2015 25 3 193 230 10.1111/isj.12063
  • Lowry PB, Spaulding T, Wells T, Moody GD, Moffit K, et al. (2006) A theoretical model and empirical results linking website interactivity and usability satisfaction. In 39th Annual Hawaii International Conference on System Sciences (HICSS 2006), pp 1–9, IEEE, Kaui, HI.
  • Lyytinen K Baskerville R Iivari J Te’Eni D Why the old world cannot publish? Overcoming challenges in publishing high-impact IS research European Journal of Information Systems 2007 16 4 317 326 10.1057/palgrave.ejis.3000695
  • Magasin M Gehlen FL Unwise decisions and unanticipated consequences Sloan Management Review 1999 1999 Fall 37 60
  • Milberg SJ Smith HJ Burke SJ Information privacy: corporate management and national regulation Organization Science 2000 11 1 35 57 10.1287/orsc.11.1.35.12567
  • Milne GR Culnan MJ Using the content of online privacy notices to inform public policy: a longitudinal analysis of the 1998–2001 U.S. web surveys Information Society 2002 18 5 345 359 10.1080/01972240290108168
  • Mohan J Yaacob RRR The Malaysian Telehealth Flagship Application: a national approach to health data protection and utilisation and consumer rights International Journal of Medical Informatics 2004 73 3 217 227 10.1016/j.ijmedinf.2003.11.023
  • Morath JM Turnbull JE To do no harm: ensuring patient safety in health care organizations 2005 San Francisco Jossey-Bass
  • Murphy A, Reddy M and Xu H (2014) Privacy practices in collaborative environments: a study of emergency department staff. In 17th ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW), pp 269–282, ACM, Baltimore, MD.
  • Nord GD McCubbins TF Privacy, legislation, and surveillance software Communications of the ACM 2006 49 8 73 78 10.1145/1145287.1145290
  • Ohno-Machado L Silveira PSP Vinterbo S Protecting patient privacy by quantifiable control of disclosures in disseminated databases International Journal of Medical Informatics 2004 73 7 599 606 10.1016/j.ijmedinf.2004.05.002
  • Orlikowski WJ CASE tools as organizational change: investigating incremental and radical changes in systems development MIS Quarterly 1993 17 3 309 340 10.2307/249774
  • Parks R, Chu C and Xu H (2011a) Healthcare information privacy research: issues, gaps and what next. In 17th Americas Conference on Information Systems (AMCIS), AIS, Detroit, MI.
  • Parks R, Chu C, Xu H and Adams L (2011b) Understanding the drivers and outcomes of healthcare organizational privacy responses. In 32nd Annual International Conference on Information Systems (ICIS 2011), Shanghai, China.
  • Peslak AR Internet privacy policies of the largest international companies Journal of Electronic Commerce in Organizations 2006 4 3 46 62 10.4018/jeco.2006070103
  • Pollock N When is a work-around? Conflict and negotiation in computer systems development Science, Technology & Human Values 2005 30 4 496 514 10.1177/0162243905276501
  • Posey C Bennett RJ Roberts TL Lowry PB When computer monitoring backfires: invasion of privacy and organizational injustice as precursors to computer abuse Journal of Information System Security 2011 7 1 24 47
  • Posey C Lowry PB Roberts TL Ellis S Proposing the online community self-disclosure model: the case of working professionals in France and the UK who use online communities European Journal of Information Systems 2010 19 2 181 195 10.1057/ejis.2010.15
  • PRC (2016) Chronology of data breaches security breaches 2005. March 18, 2016, http://www.privacyrights.org/data-breach.
  • Quantin C, Allaert F-A and Dusserre L (2000) Anonymous statistical methods versus cryptographic methods in epidemiology. International Journal of Medical Informatics 60(2), 177–183.
  • Ravera L Colombo I Tedeschi M Ravera A Security and privacy at the private multispecialty hospital Istituto Clinico Humanitas: Strategy and reality International Journal of Medical Informatics 2004 73 3 321 324 10.1016/j.ijmedinf.2003.12.009
  • Rogers EM Diffusion of Innovations 1998 New York Free Press
  • Siponen M and Vance A (2010) Neutralization: new insights into the problem of employee information systems security policy violations. MIS Quarterly 34(3), 487-A412.
  • Smith J Privacy policies and practices: inside the organizational maze Communications of the ACM 1993 36 12 104 10.1145/163298.163349
  • Smith JH Dinev T Xu H Information privacy research: an interdisciplinary review MIS Quarterly 2011 35 4 989 1015
  • Strauss AL Corbin JM Basics of Qualitative Research. Techniques and Procedures for Developing Grounded Theory 1998 Thousand Oaks Sage
  • Tadmor C and Tetlock PE (2009) Accountability. In The Cambridge Dictionary of Psychology (Matsumoto D, Ed.), p 8, Cambridge University Press, Cambridge.
  • Urquhart C Lehmann H Myers MD Putting the ‘theory’ back into grounded theory: guidelines for grounded theory studies in information systems Information Systems Journal 2010 20 4 357 381 10.1111/j.1365-2575.2009.00328.x
  • Van Maanen J Schein EH Toward a theory of organizational socialization Research in Organizational Behavior 1979 1 1 209 264
  • Vance A Lowry PB Eggett D Using accountability to reduce access policy violations in information systems Journal of Management Information Systems 2013 29 4 263 289 10.2753/MIS0742-1222290410
  • Vance A Lowry PB Eggett D A new approach to the problem of access policy violations: increasing perceptions of accountability through the user interface MIS Quarterly 2015 39 2 345 366
  • Walczuch RM Steeghs L Implications of the new EU directive on data protection for multinational corporations Information Technology and People 2001 14 2 142 162 10.1108/09593840110695730
  • Wall JD Lowry PB Barlow J Organizational violations of externally governed privacy and security rules: explaining and predicting selective violations under conditions of strain and excess Journal of the Association for Information Systems 2016 17 1 39 76
  • Warkentin M, Johnston AC and Shropshire J (2011) The influence of the informal social learning environment on information privacy policy compliance efficacy and intention. European Journal of Information Systems 20(3), 267–284.
  • Xu H, Teo H-H, Tan BC and Agarwal R (2009) The role of push-pull technology in privacy calculus: the case of location-based services. Journal of Management Information Systems 26(3), 135–174.
  • Yeh QJ Chang AJT Threats and countermeasures for information system security: a cross-industry study Information & Management 2007 44 5 480 491 10.1016/j.im.2007.05.003

Appendix A

See Table A1.

Table A1 Summary of data sources

Appendix B

Appendix C

Appendix D

See Table D1.

Table D1 Illustrative supporting data for unintended consequences

Appendix E

See Table E1.

Table E1 Illustrative supporting data for intended or expected consequences

Appendix F

See Table F1.

Table F1 Empirical grounding of the study

Appendix G

See Table G1.

Table G1 Research process evaluation criteria

Appendix H

See Table H1.

Table H1 Sample privacy and security assessment tool
