
Surveillance and privacy as coevolving disruptions: reflections on “notice and choice”

Pages 14-26 | Received 08 Nov 2021, Accepted 27 May 2022, Published online: 15 Jun 2022

Abstract

The last two decades have seen a dramatic rise in the ways governments, private industry, and other interests can access, gather, analyze, and employ information about citizens. Given the unique roles of practitioners from both the public and private sectors in this policy ecosystem, we lay the conceptual groundwork for understanding coevolving technological, business, and policy disruptions. We focus on the “notice and choice” framework (which we consider a type of nudge) given changing conceptions of privacy in a world of evolving norms. Finally, we consider the difficult problem of providing clear guideposts for policy actors as they try to reconcile public demands for privacy with competing interests.

1. Introduction

In early 2020, the New York Times documented the capabilities of Clearview AI’s facial recognition software, now deployed in over 600 law enforcement organizations; Clearview expects its database to eventually include all humans (Harwell 2022). Other companies had resisted building such a tool, but the market responded when the startup Clearview moved first (Hill 2020). While this may seem intrusive, anyone can now perform facial recognition with little more than an image file and Google – a capability fueled by extremely precise algorithms and fed by the massive scraping of images.

This is an example of the dramatic rise in how governments, private industry, and others can access, gather, and analyze information on communities. At the same time, the public now demands services based on those capabilities from both governments and firms. Yet, firms and governments now often encroach upon long-held principles prioritizing personal privacy; today’s governance strategies are rooted in older legal and policy principles that rarely address evolving technologies and practices. As such, emerging technologies and databases have led to calls for regulatory limits on data collection by corporations, individuals, and the government itself – although public opinion is often conflicted on such matters. Regulation will evolve because the surveillance state and surveillance capitalism will grow over time; such disruptions mean governance will be contentious and incomplete.

Indeed, privacy itself can be an elusive concept (Hallinan, Friedewald, and McCarthy 2012; Hartzog 2021). As Solove has noted, “privacy is not one thing, but a cluster of many distinct yet related things” (Solove 2008, 40). In general, and for the purposes of this article, data privacy involves an individual’s appropriate expectation of privacy regarding personal data and control over who has access to it. Conceptually distinct is data protection, which concerns the technical control and management of such information (e.g. securing data against unauthorized access, identity and access management) (Park 2020, 1458). Our focus here is on the former more than the latter.

Specifically, we offer a new argument about the governance of privacy-related concerns that centers on coevolving technology, business, and policy disruptions. We argue that disruptive business innovations challenge existing regulatory regimes, leading to potential policy disruptions (Jones and Baumgartner 2005; May, Sapotichne, and Workman 2009a, 2009b), perhaps even paving the way for endogenous disruption.

One core regime that emerged over time centered on “notice and choice” (Hoofnagle and Urban 2014), which can be thought of as “nudging” people toward deeper consideration of the privacy consequences of technology use. Over time, that regime has come into question, in part due to disruptive technologies that allow for collecting and processing ever more data. However, those technologies matter mainly because they have driven certain business innovations that feed on such data; indeed, these business innovations are themselves disruptions (Christensen 1997; Christensen and Raynor 2003). In a nutshell, search (e.g. Google) and social media (e.g. Facebook) probably vie for the title of “most disruptive business model” in terms of impacts on personal privacy (e.g. Horgan 2020). Of course, such technologies and models are both supercharged by other disruptions like artificial intelligence and big data (e.g. Feldstein 2019; Ferguson 2020; Friedman 2017).

“Notice and choice” is an especially important example of the delicate balance that emerged between fulfilling needs (e.g. safety and consumer satisfaction) and respecting privacy – a balance that has shifted as events unfolded (e.g. September 11). While many argue that policy should track public opinion (Stimson, Mackuen, and Erikson 1995), disconnects arise when people are conflicted about tradeoffs. Opinion about surveillance’s benefits and its costs in terms of privacy could act as an enabler of or a constraint on new forms of governance in this area.

In the section that follows, we make our argument about coevolving technological, business, and policy disruptions. We then consider public opinion as enabler of or constraint on changing governance regimes. Finally, we consider guideposts for policy actors in their attempts to reconcile public demands for privacy with competing interests.

2. Privacy and surveillance as disruption

Our focus centers on the demand for (and thus, supply of) technologies for the detection of behavior and attributes. “Surveillance” is the “scrutiny of individuals, groups, and contexts through the use of technical means to extract or create information” (Marx 2016, 20), a definition that omits words and concepts such as “observe”, “control”, or “intrusion”. Surveillance is “a regard or attendance to others … or to factors presumed to be associated with these” – it is the “gathering of some form of data connectable to individuals” (Marx 2015, 734).

This latitude has contributed to the variety of ways of understanding such a broad social process (Ball, Haggerty, and Lyon 2014; Brin 1998; Brunton and Nissenbaum 2015; Carr 2015; Hartzog 2018; Jeffreys-Jones 2017; Lane 2003; Lyon 2007; Marx 2016; Nissenbaum 2010; O’Hara and Shadbolt 2008; Schneier 2016; Zuboff 2019). Even Foucault analyzed the “panoptic” measures taken in a 17th-century crisis: “the major effect of the Panopticon: to induce in the inmate a state of conscious and permanent visibility that assures the automatic functioning of power” (Foucault 1995, 201). Accelerated by technological change, surveillance has attracted growing attention, and social science has long focused on information flows in systems. Surveillance itself is neutral; what matters is “context and comportment”, “structures and processes” (Marx 2015, 733, 737).

But technological, business, and policy changes have accelerated information flows – leading to greater concerns about surveillance and privacy. Naturally, many focus on technology as the main disruption reshaping the balance between the public and the personal: the expansion of the internet, data collection in information systems, and the control and aggregation of big data (Bélanger 2011; Koops and Leenes 2014; Krumm 2009; Li 2011; Pavlou 2011; Spiekermann and Cranor 2009; Ye et al. 2016). While cameras and DNA analysis have changed the nature of surveillance, “strategic surveillance” always involves tools and rules to enhance or limit watching; new tools simply make it easier to observe and broaden the system’s data bandwidth (Marx 2015, 735). Technological disruption has merely brought new “material artifacts, software, and automated processes” for observation.

We argue that each technological innovation has driven business innovations that feed on data; it is these business innovations that are truly disruptive (Christensen 1997; Christensen and Raynor 2003). The human use of technology (not the technology itself) disrupts when it restructures relationships in markets. Not all innovations are disruptive, and some disruptive innovations arrive without any new technology (Biber et al. 2017). Obviously, some technologies (e.g. human cloning) carry ethical questions, but most long-run social change comes when disruptive business innovations leverage breakthrough technological innovations (e.g. artificial intelligence) (Christensen, Raynor, and McDonald 2015).

Curiously, few scholars have engaged the next question: When does disruptive innovation lead to policy disruptions in basic governance infrastructures? Sometimes mismatches between innovations and governance structures lead to policy responses (Biber et al. 2017, 1581–1587). Yet, we know that true policy disruptions change how problems or solutions are conceptualized by shifting attention within a policymaking domain – mostly through external shocks but also through changing power alignments, perhaps because of new ideas or players (Jones and Baumgartner 2005; May, Sapotichne, and Workman 2009a, 2009b).

This framework helps us understand the coevolution of technology, business, and policy. Consider the following example. Worries about intrusive artifacts, software, and processes often lead to protective policies; one example is the “notice and choice” regulatory regime (Hoofnagle and Urban 2014). A website offers a user a choice: “The ‘notice’ is the presentation of terms, typically in a privacy policy or a terms-of-use agreement; the ‘choice’ is an action, typically using the site or clicking on an ‘I agree’ button, which is interpreted as the acceptance of the terms” (Sloan and Warner 2014, 79). These practices find their origin in pre-internet principles concerning data privacy. The pioneering privacy scholar Alan F. Westin was among the first to propose and advocate an informational self-determination framework for protecting privacy, asserting that privacy turned on the ability of individuals (and groups) to determine for themselves when and how their personal information is communicated to others (1967, 7). Westin’s influential survey work on citizens’ privacy preferences helped establish the “hegemony of the notice-and-choice regime” (Hoofnagle and Urban 2014, 262; Kumaraguru and Cranor 2005).
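In software terms, the mechanics that Sloan and Warner describe are strikingly thin. The following sketch is our own illustration, with hypothetical names throughout; it reflects no particular site’s implementation:

```typescript
// Minimal sketch of a notice-and-choice flow. All identifiers are
// hypothetical and purely illustrative.

interface ConsentRecord {
  userId: string;
  policyVersion: string; // which "notice" the user was shown
  acceptedAt: Date;      // when the "choice" was made
}

// The "notice": present the terms, typically a privacy policy.
function presentNotice(policyVersion: string): void {
  console.log(`Please review our privacy policy (version ${policyVersion}).`);
}

// The "choice": a single click on "I agree" is interpreted as
// acceptance of everything the notice contains.
function recordChoice(userId: string, policyVersion: string): ConsentRecord {
  return { userId, policyVersion, acceptedAt: new Date() };
}

// The entire regime rests on this two-step exchange.
presentNotice("2022-01");
const consent = recordChoice("user-123", "2022-01");
console.log(`Consent recorded at ${consent.acceptedAt.toISOString()}.`);
```

Note that nothing in this exchange verifies that the user read, let alone understood, the terms – a gap that becomes central below.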

This work and others paved the way for the codification of informational privacy policy – beginning with the United States Department of Health, Education, and Welfare’s 1973 report on Fair Information Practice Principles (FIPs). These guidelines echoed the self-determination framework and inspired a number of privacy policy initiatives in the United States and Europe, including the U.S. Privacy Act of 1974 and the European Union’s 1995 Data Protection Directive, among others (e.g. Park 2020). Ultimately, FIPs’ foundational principles promoting individual self-determination became embodied in 21st-century regulatory frameworks, such as the EU’s General Data Protection Regulation (GDPR), Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA), and the California Consumer Privacy Act (CCPA) (e.g. Hartzog 2021). Westin’s definition of privacy is reflected in the principle of informational self-determination, first used in Germany for constitutional interpretation and now considered foundational for modern views on data protection and the right to be forgotten (Kodde 2016). Yet, many now question this regime because technology allows data collection unanticipated by this 1960s-era model: “Big Data represents a challenge that points to the need for collective and political approaches to self-protection rather than solely individual, atomistic approaches” (Allen 2011; Allen 2016, 78).

In our view, disruptive business innovations challenge existing governance regimes, and those challenges can lead to potential policy disruptions. We believe that Christensen’s disruptive business innovation is a primary mechanism: disruptive technologies gain practical meaning when paired with disruptive innovation by firms. Because firms understand that policy affects profitability (Baron 1995), disruptive innovation should in turn drive policy disruption. Indeed, endogenous disruption can occur if policy then feeds back into the technology-business cycle.

Returning to our example: while the “notice and choice” system limits some data collection opportunities, it also creates opportunities for new technologies that undermine the system’s effectiveness. As one summary of global regulatory regimes for surveillance and privacy notes, “since 2006, much has changed in the real and virtual worlds in which privacy governance was implicated, in the academic world that studied them, and in the policy world that aimed to regulate them” (Bennett and Raab 2020, 448). While “notice and choice” evolved from Westin’s seminal 1967 book Privacy and Freedom (Westin 1967), over time many technologies and business models emerged to circumvent the regime. Policy instruments evolved in response to firm-level disruptive innovation (Bennett and Raab 2017, 2020), but previous national and transnational instruments also spurred such innovation.

A notable example is how major technology firms now use “dark patterns” to circumvent Europe’s GDPR (Forbrukerrådet 2018; Nouwens et al. 2020). “Dark patterns” are design features that nudge or steer users into doing things that are profitable for the firm but often harmful to the user (Chopra 2020). Consider the range of technological tricks available for getting individuals to reveal personal information:

Typically, digital tricks and traps work in concert, and dark patterns often employ a wide array of both. Dark pattern tricks involve an online sleight of hand using visual misdirection, confusing language, hidden alternatives, or fake urgency to steer people toward or away from certain choices. This could include using buttons with the same style but different language, a checkbox with double negative language, disguised ads, or time pressure designed to dupe users into clicking, subscribing, consenting, or buying. (Chopra 2020, 1)
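The contrast between a neutral prompt and a dark-pattern variant is easy to make concrete. The sketch below is our own hypothetical illustration (all labels and defaults are invented for exposition and drawn from no named product) of two of the tricks Chopra lists: a preselected default and double-negative wording.

```typescript
// Hypothetical consent prompts contrasting neutral design with two
// classic dark-pattern tricks. All strings and defaults are invented.

interface ConsentPrompt {
  label: string;             // text next to the consent checkbox
  checkedByDefault: boolean;
  acceptButton: string;
  declineButton: string;
}

const neutralPrompt: ConsentPrompt = {
  label: "Share my browsing data with advertising partners.",
  checkedByDefault: false,        // sharing requires a deliberate act
  acceptButton: "Agree",
  declineButton: "Decline",       // both options equally prominent
};

const darkPatternPrompt: ConsentPrompt = {
  // Double-negative wording: leaving the box checked means data IS shared.
  label: "Uncheck this box if you do not want us to share your data.",
  checkedByDefault: true,         // sharing is the preselected default
  acceptButton: "CONTINUE",       // large, prominent, urgent
  declineButton: "other options", // small, vague, easy to miss
};

console.log(neutralPrompt.label, "| checked by default:", neutralPrompt.checkedByDefault);
console.log(darkPatternPrompt.label, "| checked by default:", darkPatternPrompt.checkedByDefault);
```

Neither prompt changes what the firm collects in principle; the dark-pattern version simply engineers the “choice” half of notice and choice so that most users never exercise it.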

While technologists have deployed dark patterns since at least 2010, only recently have regulators like Chopra focused on their impact on users’ appropriate expectations of privacy. This is not surprising. As Hartzog notes, most modern data privacy rules are essentially designed to promote individualistic notions of privacy (i.e. autonomy); they are not built to disrupt power disparities between corporations and people or to promote the collective wellbeing of a diverse population (Hartzog 2021, 1683–1684).

Effectively, technological change supports new and disruptive business models; whether policy change follows depends on how users, consumers, and communities work with policymakers to challenge those innovations. As Marx observed, while technology changes the terms of trade, the “context and comportment” of those threats remain key. Do users, consumers, and communities know enough about all this to push regulators toward new governance models? We therefore next examine nuances in public opinion about privacy and surveillance to better understand the next round of potential policy disruptions.

3. Public opinion as enabler or constraint?

We first make two observations about why this matters. First, we reiterate how public opinion research has shaped our understanding of privacy and surveillance. As noted, Westin’s survey research helped establish the notice-and-choice regime as the core framework for consumers navigating innovative business models dependent on personal data (Hoofnagle and Urban 2014; Kumaraguru and Cranor 2005; Westin 1967). Between 1978 and 2004, Westin conducted over 30 surveys studying general privacy-related questions as well as more specific topics like consumer and medical privacy (Kumaraguru and Cranor 2005). The indices he created from those survey questions formed the basis of how we now understand the topic: that respondents fall into three groupings in their views of privacy (fundamentalists, pragmatists, or the unconcerned). Over time, Westin argued that “show me and let me decide” came to prevail among the public, and that “surveys show that consumer privacy concerns have not been lessened by 9/11” (Westin, quoted in Kumaraguru and Cranor 2005, 18). Regardless of academic concerns about those studies (Gandy 2003; Kumaraguru and Cranor 2005, 19), what he learned about public opinion helped create, “almost single-handedly, the modern field of privacy law” (Fox 2013, D7).

Second, even though this policy space is rooted in public opinion research, it is important to consider the context within which such analyses occur. We can see this at the micro level in O’Hara and Shadbolt’s (2008) book about the inevitable “spy in the coffee machine” that “might sense when coffee had been poured and then send a message to the car ignition”, starting a chain of signals and events that can be monitored, “telling the world what is going on in your home” (O’Hara and Shadbolt 2008, 8–9). One study went further by describing exactly what happened when a member of an academic department installed a closed-circuit camera in the departmental kitchen to help enhance cleanliness (Ullmann-Margalit 2008). People voiced a rich set of concerns before the camera was removed. Responses ranged from claims about reasonable expectations of privacy, to statements that people should have nothing to hide, to debates about shame and shaming sanctions, and even to questions about the appropriate mechanism for making collective decisions about such interventions. Even the “spy in the coffee machine” was part of bigger debates about how we constrain and regulate antisocial behavior in a world of new technologies. Again, it is all about “context and comportment”.

Public opinion often sets the stage for new policy or legislative interventions (Dalton 2020; Jones and Baumgartner 2005; Mansbridge 2003; Stimson, Mackuen, and Erikson 1995). Much like a thermostat, “[t]he public makes judgements about current public policy … that will change as policy changes, as real-world conditions change, or as ‘politically colored’ perceptions of policy and conditions change. And as the simple model indicates, politicians and government officials sense these changes in public judgment and act accordingly” (Stimson, Mackuen, and Erikson 1995, 544). We can debate whether administrators or other policymakers accurately reflect the public’s preferences, as if the public had full and complete information (Box 1992; McCrone and Kuklinski 1979).

Given that even academics debate the balancing of benefits and costs in surveillance and privacy, does public opinion enable or constrain policy disruption in this area? Because context is key, we restrict our attention to cases from the USA and Australia; clearly each community is different, but we believe our focus draws a useful picture of public opinion as enabler or constraint.

There is some public consensus in certain areas. First, the public is concerned about whether surveillance and the collection and leveraging of personal data have diminished privacy. A 2019 Pew Research Center study found that 79% of American adult respondents were either “very” or “somewhat” concerned about how companies use the data they collect about them; 64% said the same about government data collection efforts. Seventy percent reported feeling that their personal data was less secure than it had been five years earlier. Finally, 75% thought there should be more government regulation of what companies can do with personal data; this held for both Democrats (81%) and Republicans (70%) (Auxier et al. 2019).

A 2017 Australian survey echoed this. The vast majority were not comfortable with targeted advertising based on search results or with data collection by social network companies. Only 33% were willing to share personal information for commercial benefits. E-commerce (19%) and social media (12%) were among the lowest-scoring industries examined (in contrast, financial institutions scored 59%) (Wallis Consulting Group Pty. Ltd. and Office of the Australian Information Commissioner 2017).

However, citizens also recognized that intrusive surveillance and personal data collection often come in exchange for national security provided by governments and useful services provided by companies. Yet, not all tradeoffs are considered equal. In both surveys, there is a substantial gap between people’s willingness to accept government surveillance and data collection in exchange for security and their willingness to accept similarly intrusive actions in exchange for corporate services (Auxier et al. 2019; Wallis Consulting Group Pty. Ltd. and Office of the Australian Information Commissioner 2017).

Consider the context. People want privacy, but the decisions they make in specific tradeoffs (e.g. consenting to data collection for an immediate commercial benefit) may not be free of cognitive bias or compromised capacity. In fact, many people may simply lack the requisite tools and wherewithal to resist bargaining away their privacy. One survey on digital privacy asked Americans whether “they feel that they understand the laws and regulations that are currently in place to protect their data privacy.” Most (63%) said either “very little” or “not at all” (3% said “a great deal”; 33% said “some”). Most said they had little to no knowledge of what was being done with the data collected about them by government or companies. Most Americans also believe it is impossible to go through daily life without having their data collected by companies (62%) or the government (63%) (Auxier and Rainie 2019). The Australian survey echoed this regarding citizens’ practical abilities to ensure the privacy of their personal data (Wallis Consulting Group Pty. Ltd. and Office of the Australian Information Commissioner 2017).

Many people are “privacy pragmatists” (in Westin’s phrasing), but few have broad and deep knowledge of actual privacy protections and processes (Hoofnagle and Urban 2014, 298). Knowledge and information are strongly correlated with views – and levels of both are low (Gandy 1993). Consequently, even if people understand data protection laws and actually read notice-and-choice terms of service (most do not; Auxier and Rainie 2019), there remains the concern that “watchers” can manipulate privacy consent.

4. Notice, choice, and nudges

Consequently, we are faced with a policymaking dilemma. The public has legitimate concerns about data privacy and how their personal information is acquired and used by government and private entities; at the same time, the public may not have the requisite information, wherewithal, or capacity to resist bargaining away their informational lives. How is such a situation to be resolved? Even if the deck of disruption is stacked against the user, can mechanisms like notice and choice provide real protection?

Specifically, are such situations ripe for expanded “nudging” strategies (Sunstein 2014; Thaler and Sunstein 2009)? On one hand, research suggests that consent can be manipulated through the design of the choice architecture; in privacy opt-in (or opt-out) choices, test subjects were strongly influenced by the default-choice treatments (manipulations) offered in notice-and-consent policies (Johnson, Bellman, and Lohse 2002). On the other hand, nudge-type policies may encourage individuals to protect their own privacy (Coventry et al. 2016). In that study, respondents were asked whether they would accept a cookie from a website. Respondents were not warned that cookies can be used to track their internet behavior and collect user-specific information, but they were given a prompt (a treatment) regarding cookie acceptance. One group was told that most of their peers had accepted the cookie, another group was told few had accepted it, and a third group received no prompt. The study found that users in the minority treatment (told that few peers had accepted) were much less likely to accept the cookie. Personality assessments showed that the behavioral nudge could attenuate individuals’ impulsivity and risk-acceptant behavior. A host of other studies have likewise shown both positive and negative effects of nudges on privacy-related choices (Acquisti 2009; Acquisti et al. 2018; Choe et al. 2013; Dogruel 2019; Monteleone et al. 2015).
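The power of the default, in particular, can be seen in a toy simulation. The sketch below is not the design or data of any cited study; it simply assumes (our assumption, for illustration) that users deviate from a preselected default only when their preference against it outweighs the effort of switching.

```typescript
// Toy model of the default effect in privacy consent. The switch-cost
// parameter and thresholds are invented for illustration; they are not
// estimates from Johnson, Bellman, and Lohse (2002) or any other study.

function simulateSharingRate(defaultIsOptIn: boolean, n = 10_000): number {
  const switchCost = 0.3; // effort required to deviate from the default
  let sharing = 0;
  for (let i = 0; i < n; i++) {
    const concern = Math.random(); // privacy concern, uniform on [0, 1)
    // A user deviates from the default only when the strength of their
    // preference against it exceeds the cost of switching.
    const endsUpSharing = defaultIsOptIn
      ? concern <= 0.5 + switchCost // stays in unless strongly concerned
      : concern < 0.5 - switchCost; // opts in only if strongly unconcerned
    if (endsUpSharing) sharing++;
  }
  return sharing / n;
}

console.log("Opt-in default :", simulateSharingRate(true));  // ~0.80
console.log("Opt-out default:", simulateSharingRate(false)); // ~0.20
```

Under identical preferences, the share of users who end up disclosing data swings dramatically with nothing but the default – precisely the lever that both benevolent nudges and dark patterns pull.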

Nudges are used extensively in the public sector, but the ethics of their use remain unsettled (Bovens 2009; Caporale Madi 2019; Sunstein 2016; White 2013). Indeed, we know that one of the collective benefits of privacy is to enhance personal autonomy. As such, nudges (deployed for good or bad reasons) are potential intrusions on individual privacy and arguably incompatible with autonomy and libertarianism (Kapsner and Sandfuchs 2015). Yet, people find these choice environments hard to navigate because of the information problems associated with new technologies.

Consider again “notice and choice” oriented frameworks for protecting citizen privacy. Though a legacy of Westin’s research, such policy designs do not provide adequate privacy protections. A more proactive, deliberate, and sophisticated approach is needed – one that helps people make the privacy choices they would make given better information and optimal conditions. The idea of nudge-oriented regulatory strategies that lead citizens toward “better” behaviors remains compelling (Thaler and Sunstein 2009). Such regulatory policies may help secure citizen privacy when corporations will push data collection as far as citizens allow. In fact, public opinion may in many cases act more as a constraint on than an enabler of new governance approaches. However, doing so means grappling with what Aaron Wildavsky posed as an eternal question of governance: “what is permissible so that this people may survive?” (Wildavsky 1989).

5. Conclusion

We began with two stylized facts: the dramatic rise in the ways governments, industry, and others access, gather, and analyze information on citizens; and growing public demand for services provided by governments and industry involving the collection and assembly of individuals’ personal information. The balance between respecting privacy and fulfilling needs like security or consumerism is always tenuous. Yet, the convergence between ability to surveil and demand for information creates tension in systems charged with governing and administering principles that prioritize privacy. Because it is difficult to put the genie back in the bottle, those threats will remain in the spotlight for many years.

Disruptive technologies like surveillance tools become problematic when disruptive business innovations see profit opportunities in their application. When rules create barriers that in turn create premia for information then, as Jeff Bezos has said, “your margin is my opportunity”. Business models emerge, disruptive change sets the stage for potential policy disruptions, and policy changes then create new barriers for future business strategies to overcome or exploit. In the case of privacy, the dominant “notice and choice” regime that evolved from the work of Alan Westin has played such a role; new and perhaps more intrusive methods of data collection and extraction (e.g. dark patterns and machine learning) are on the horizon.

Westin’s public opinion research should be recognized, as should the growing evidence of just how little average people understand the stakes in battles over privacy. Public opinion data show that people understand tradeoffs, but they may not understand the terms of trade – the contexts and processes that determine the efficacy of recent technological and business innovations.

This is one reason why it is so difficult to chart a way forward in this area. Perhaps the most intriguing “solution” to the problems described here is the use of nudges to encourage more self-protective behaviors by users, consumers, and communities. However, even the use of nudges to shape the choices people make when navigating the second half of the “notice and choice” approach leads to strong ethical debates about the meaning of privacy itself in a world where protecting autonomy requires taking advantage of others’ ingrained cognitive biases. Debates over nudging will continue – if only because the “dark patterns” that emerged as a business response to “notice and choice” are themselves a nudge, just deployed for the purposes of the business model.

Similarly, debates about surveillance and appropriate regulation will continue, if only because modern governance is fueled by information, and access to that information depends on watching techniques that often impinge on privacy. In this sense, multi-century debates about the Panopticon or the “gaze” have become almost synonymous with our broader understanding of modern governance. This does not diminish the importance of improving our understanding of surveillance and other threats to privacy. Instead, this confluence of disruptions – technological, business, and policy – pushes us to better understand the long-run evolution of such forces. In doing so, we may improve our overall comprehension of the diversity and breadth of such questions for modern governance, and of what falls outside those conversations, for it may be that in this era surveillance (like software) “has eaten the world”.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References