Research Article

Reimagining the inevitable: how metaverse imaginaries construct understandings of privacy and surveillance

Received 09 Mar 2023, Accepted 20 May 2024, Published online: 31 May 2024

ABSTRACT

Previous research on consumer privacy has heavily emphasised an individualistic perspective, focusing on protection against individual harm and the importance of privacy self-management. In today’s data-driven society, understanding the collective nature of privacy and its connections with increasing surveillance and larger societal harms is important. By employing media analysis of textual data, we explore how sociotechnical imaginaries of the metaverse constitute perceptions of surveillance and privacy. The findings identify four rhetorics that underpin metaverse imaginaries, two reinforcing the prevailing surveillance ideology (inevitability and technological solutionism) and two challenging it (decentralisation and collaboration). This article contributes to the privacy literature in consumer research by illuminating how privacy is given sociocultural meaning as part of the surveillance ideology and by extending the literature on the problematisation of individualised consumer privacy. We argue that the way these imaginaries are constructed has implications for how privacy and surveillance will unfold in the future.

Introduction

The topic of privacy has received frequent attention in consumer research (Martin and Murphy Citation2017; Scarpi, Pizzi, and Matta Citation2022). As the amount of data collected on consumers continues to accumulate, protecting consumer privacy while creating new, more personalised experiences (Cloarec Citation2020) is becoming a crucial issue for marketers. Although many countries have regulated data privacy, consumers are still ambivalent about the issue. For instance, in a worldwide survey of internet users from 2023, 83% of respondents said that they would like to do more to protect their privacy online (Statista Citation2023a). However, a large share of consumers is also willing to accept online privacy risks in exchange for a more convenient life (Statista Citation2023b). At the same time, companies’ use of new technologies, such as artificial intelligence (Vlačić et al. Citation2021), raises novel considerations for consumer privacy. Thus, understanding privacy issues from many perspectives is important.

Still, most previous studies on privacy in marketing and consumer research have been directed at studying the concept from microeconomic and psychological perspectives and have focused on measuring behavioural and attitudinal factors (Cloarec Citation2020; Martin and Murphy Citation2017; Stewart Citation2017). Concepts such as the privacy calculus, which refers to rational cost-benefit analyses (Beke et al. Citation2022; Culnan and Armstrong Citation1999), or the privacy paradox, which refers to the gap between privacy attitudes and behaviour (Norberg, Horne, and Horne Citation2007; Scarpi, Pizzi, and Matta Citation2022), have dominated the consumer privacy literature. Most of this research has been conducted with the objective of discovering ways to predict how consumers will behave in various privacy-related situations by uncovering causal relations between factors, such as trust and privacy concerns (Bleier and Eisenbeiss Citation2015) or technical knowledge and paradoxical behaviour (Barth et al. Citation2019). This approach, though, ignores the larger sociocultural systems within which these behaviours are shaped (Horppu Citation2023; Sörum and Fuentes Citation2023). Additionally, the collective nature of privacy (L. Taylor, Floridi, and van der Sloot Citation2017) and its connections with society’s datafication, surveillance capitalism (Zuboff Citation2019) and surveillance culture (Lyon Citation2019; Strycharz and Segijn Citation2022) have remained understudied in marketing and consumer research.

This paper’s empirical data originate from discussions on the emergence of the metaverse as a new technology impacting businesses and consumers. The metaverse is a vision of the future in which the internet takes a 3D form and becomes an immersive virtual world facilitated by augmented and virtual reality (McKinsey & Company Citation2022). In the most radical visions, an increasing amount of our work, hobbies and social relationships will take place in a virtual environment. Multiple players, including Meta (formerly Facebook), are heavily investing in the development of the needed new technologies. These technologies, such as VR glasses, enable all-encompassing data capturing, including the tracking of posture, eye movement and other biometric data (McKinsey & Company Citation2022), reaching a new, very intimate level of data collection.

The metaverse is a fruitful context for privacy research, as it is not yet fully materialised in the everyday lives of consumers but entails potentially significant changes in how privacy issues will be managed and governed in the future. Furthermore, while marketing research has seen increasing interest in the metaverse and its opportunities for marketers (Barrera and Shah Citation2023; Dwivedi et al. Citation2023), the sociocultural implications for privacy have not been addressed. This paper examines the metaverse through media content as an entry point. By mobilising the concept of sociotechnical imaginaries (Jasanoff Citation2015), we explore how imaginaries of the metaverse constitute perceptions of surveillance and privacy. More specifically, our research question is as follows: How do metaverse imaginaries rhetorically support or contest the ideology of surveillance and, in connection with that, construct understandings of privacy?

By addressing this question, our work contributes to privacy research in the marketing literature by developing an understanding of privacy as a sociocultural phenomenon. In particular, we explore how the sociocultural understandings of privacy are embedded within the prevailing ideology of surveillance. We show that, in order to reimagine the current data practices in society, consumer privacy conceptualisations should go beyond individual consent and control. Rather, attention should be paid to collective-level surveillance and its harms to society at large. Our focus is on imaginaries, which are not only discursive but also performative in co-constructing and realising the future (Mager and Katzenbach Citation2021). They draw from cultural norms and values and are shaped within cultural contexts, thus resulting in visions that guide decision-making and orient focus to things that matter (Lupton Citation2020). Therefore, it is important to understand how imaginaries take shape and how certain technological directions become legitimised and lead to collective understandings of how the future should look (Preece, Whittaker, and Janes Citation2022).

Literature review

Privacy as an individual versus social value

The most traditional meaning of privacy is “the right to be let alone” (for an extensive discussion on the meanings of the concept, see Roessler and DeCew Citation2023). Privacy is typically treated as tied to the individual, and its protection is discussed in terms of threats and harms to individual rights, autonomy, freedom and identity (Mulligan, Regan, and King Citation2020; L. Taylor, Floridi, and van der Sloot Citation2017). Based on this, the protection of privacy is seen as the responsibility of the individual, and focus is placed on privacy self-management (Hull Citation2015), which often culminates in notice and consent regimes (Hull Citation2015; Mulligan, Regan, and King Citation2020). The concept of notice and consent revolves around providing individuals with comprehensive information about how their data are collected and used. Armed with this knowledge, they are then expected to make informed decisions about granting or withholding their consent. However, scholars and other critics of the current data environment have highlighted a multitude of issues related to consent as a mechanism of privacy protection (Carmi Citation2021; Hull Citation2015; Mulligan, Regan, and King Citation2020; Obar and Oeldorf-Hirsch Citation2018).

As Hull (Citation2015) discusses, individuals do not and cannot know what they are in fact consenting to. This is, first, because privacy statements are complicated and difficult to read and understand. They are written in a way that nudges consumers towards disclosing their data without real consideration of the consequences; thus, they are “manipulative by design” (Mulligan, Regan, and King Citation2020, 768). In general, platforms encourage information disclosure by framing it as a way of contributing to the public good (Mulligan, Regan, and King Citation2020) and by connecting it with notions of convenience and well-being (Bettany and Kerrane Citation2016). Second, at the time of consent, it is impossible to know all the future uses of the data and understand how various data traces can be combined from different sources. Furthermore, withholding consent is not a viable option in practice, as it would lead to individuals being denied the use of services that are crucial for the practices and relationships in both their professional and personal lives (Hull Citation2015).

Privacy self-management is also problematic because of what Barocas and Levy (Citation2019) call “privacy dependencies”. Privacy dependencies refer to the way in which privacy is dependent on the decisions and disclosure practices of others (Barocas and Levy Citation2019). In fact, the disclosure or withdrawal of information can reveal more about others than the person themself through three mechanisms: social ties and relationships with others, similarities to others and differences from others. This highlights the inadequacy of fighting privacy threats by focusing only on increasing individuals’ control over their own data (Mulligan, Regan, and King Citation2020). Connected to this, it further underscores the limitations of relying solely on consent and individual decision-making in the context of privacy, as it is unreasonable to expect that, in their decisions, individuals would also take the interests of others into account (Barocas and Levy Citation2019).

Nevertheless, in privacy research, the emphasis has predominantly centred on consent, resulting in privacy being understood as an individual economic choice regarding information disclosure. Individual consumers are the ones who must manage the risks related to privacy, and the possible poor management of these risks is their responsibility (Hull Citation2015). However, a focus on privacy protection through and as the responsibility of the individual overlooks the social value of privacy. As suggested by Regan (Citation1995), privacy does not offer value only to individuals but also to society in general. Privacy is needed for individuals to flourish, as people need space that is free from the intrusiveness of others (Solove Citation2007) to have room for self-making (Cohen Citation2013). Moreover, in society, privacy plays a crucial role in the exercise of rights that are integral to democracy, such as freedom of speech. Additionally, it serves as a safeguard against the arbitrary power of government (Regan Citation1995).

Privacy is also a requisite for innovation, even though these two are often depicted as adversaries. Innovation stems from having the space to experiment and explore, a condition that is hindered when surveillance and behaviour modification shape and homogenise preferences and actions (Cohen Citation2013). From this perspective, we should not approach privacy as being at odds with the larger interests of society but as an intrinsic element of society (Solove Citation2007). Hence, privacy breaches may cause harm to individuals, yet the repercussions extend beyond individuals to encompass society as a whole and its fundamental values.

Overall, privacy underpins various aspects of society, including health, democracy, fair competition, financial services and innovation (Cohen Citation2013; Mulligan, Regan, and King Citation2020; Regan Citation1995). To regard privacy fully in terms of its social value, as a core pillar of society and its functions, it is beneficial to shift the focus from individuals and explore the concept through the lens of group privacy (L. Taylor, Floridi, and van der Sloot Citation2017). Group privacy is based on the notion that, within Big Data mechanisms, the interest lies not in individuals but rather in the groups to which they belong (Floridi Citation2017). Big Data analytics increasingly collects large amounts of data, not about specific individuals, but to represent large groups of people. These aggregated data sets are then used to create segmentations and classifications that can, in turn, lead to larger societal issues, such as discrimination or the spread of disinformation (Mulligan, Regan, and King Citation2020).

Issues related to privacy on the societal level arise due to these aggregated data sets and the data collection that occurs on the collective level. Surveillance refers to the systematic extraction of personal data in large quantities, which, when grouped together, can be used for behavioural modification of not just individuals but also the masses. Therefore, within the discussion of privacy, we need to zoom out from the individual and their decision-making processes and instead focus our efforts on dissecting and challenging the mechanisms of surveillance.

Surveillance in modern society

Contemporary reality is characterised by datafication (Mayer-Schönberger and Cukier Citation2013) – the transformation of humans, their actions and relationships into quantified data. Big Data is often imagined as all-powerful (Beer Citation2019), but the issues and harms that its collection can cause have also been widely noted (e.g. Andrejevic Citation2014; Citation2017; van Dijck Citation2014; Zuboff Citation2019). The integration of datafication mechanisms into the functioning of society has been discussed through concepts such as surveillance society, surveillance capitalism (Zuboff Citation2019) and surveillance culture (Lyon Citation2019). Early on, the discussion on surveillance focused specifically on the ways in which the state monitored its citizens, mainly for security reasons (Dencik Citation2018). However, nowadays, most surveillance is conducted by corporations, and its mechanisms are ingrained at the very core of our everyday lives and relationships. These mechanisms are largely criticised for the ways in which they construct power imbalances between those who are collecting data and those whose data are being collected (Andrejevic Citation2014; West Citation2019).

One of the most well-known critics of these mechanisms is Shoshana Zuboff (Citation2019), who refers to the data regime as surveillance capitalism. She discusses how human experience is quantified and transformed into prediction products that can be sold to entities interested in not only predicting but also modifying human behaviour through mechanisms such as hypernudging (Yeung Citation2017). In hypernudging, the ultimate goal is to affect consumers’ decision-making on the unconscious intention level, which leads to not only knowing but also co-creating the consumer subject (Darmody and Zwick Citation2020; Yeung Citation2017). The initial conformity to these mechanisms of surveillance was fuelled by the neoliberal economic and political landscape, including a growing demand for individualisation and personalised experiences.

Surveillance is constant and all-encompassing, as multiple data sources are used (Zuboff Citation2019). Indeed, organisations are guided by a data imperative (Fourcade and Healy Citation2017) that suggests that successful businesses extract “all data, from all sources by any means possible” (Sadowski Citation2019, 2). Furthermore, surveillance cannot be escaped, as data are collected without permission and without consumers having the ability to understand how and when this collection occurs (Andrejevic Citation2014; Zuboff Citation2019). Indeed, Sadowski (Citation2019) asserts that data accumulation should rather be understood as data extraction, as data are taken without meaningful consent from, or fair compensation to, their subjects. In this way, surveillance becomes normalised and ingrained in society and its operations in the form of an ideology.

This ideology can pose threats to “activities central to democracy, including peaceful protests and religious worship, and core tenets of democracy, including fairness and non-discrimination” (Mulligan, Regan, and King Citation2020, 770). These societal-level harms are ignored when approaching privacy through an individual perspective that highlights notice and consent (Dencik Citation2018; Mulligan, Regan, and King Citation2020). Moreover, emphasising privacy protection as the responsibility of the individual downplays the importance of wider coordinated protection through legislation and regulation. In this way, privacy can become not a countermeasure but the “partner-in-crime” of surveillance (Coll Citation2014). Therefore, adopting a collective-level perspective on privacy is needed to challenge the ideology of surveillance.

Imaginaries of surveillance

Taking a slightly different perspective from Zuboff, Lyon (Citation2019) approaches the current data regime through surveillance culture. According to this view, individuals are not just passive victims of the system but instead play an active role in constructing, strengthening and opposing a culture of surveillance. Emphasis is placed on how individuals experience surveillance in everyday life and negotiate its meanings and associated practices, as well as on how people themselves partake in the surveillance of others (Lyon Citation2019). Within everyday life, the responses to surveillance take many forms and are more ambiguous than just pure acceptance or resistance (Lupton Citation2020; Lyon Citation2019; Sörum and Fuentes Citation2023). Lyon (Citation2019) discusses the experience of and engagement with surveillance through surveillance imaginaries and practices. Surveillance imaginaries involve the ways in which people envision the world of surveillance and their part in it, while surveillance practices include the everyday practices of both taking part in surveillance and protecting oneself from it.

Within a culture, there are social imaginaries in circulation through which the meanings and understandings related to data collection and surveillance are formulated (Sörum and Fuentes Citation2023). These cultural-level imaginaries take their shape through a discussion among various parties, such as academics, businesses, industry experts and journalists, and are disseminated via traditional and social media as well as popular culture (Jasanoff Citation2015; Lyon Citation2019). These then inform the individual-level imaginaries that help individuals situate themselves within the culture of surveillance (Lyon Citation2019; Sörum and Fuentes Citation2023).

In this study, the concept of a sociotechnical imaginary is used as an analytical tool that allows for dissecting how, on a cultural level, the ideology of surveillance gains prominence and how understandings of privacy are intertwined with it. Sociotechnical imaginaries refer to collective visions of desired futures and focus on how these futures can be achieved through advances in science and technology (Jasanoff Citation2015). Accordingly, in this study, we analyse how the future of surveillance and privacy is understood through an advancement in technology – the metaverse. Thus, compared to C. Taylor’s (Citation2004) social imaginaries, the study of sociotechnical imaginaries specifically emphasises the ways in which the material world is intertwined with the social (Jasanoff Citation2015). Imaginaries are performative; they are ingrained with normative notions of the future and guide research, innovation and consumers’ subjectivities (Jasanoff Citation2015; Mager and Katzenbach Citation2021; Preece, Whittaker, and Janes Citation2022). Through their focus on action and materialisation through technologies, imaginaries differ from discourses, which more specifically focus on language (Jasanoff Citation2015).

Mager and Katzenbach (Citation2021) highlight three characteristics of sociotechnical imaginaries. First, they are multiple, meaning that at any one time there are always multiple imaginaries in circulation, some more powerful than others. Connected to multiplicity, sociotechnical imaginaries are also contested. This means that these multiple imaginaries might sometimes conflict and thus fight for dominance (Mager and Katzenbach Citation2021). The fact that imaginaries are multiple and contested implies greater complexity and dynamism compared to ideologies, which are more directly linked to power and social structures and, therefore, are more deeply rooted (Jasanoff Citation2015). Lastly, imaginaries are commodified. This suggests that, in many instances, it is not state actors but rather corporate players that construct and bring forth these imaginaries. Usually, they do this from a utopian perspective, painting a picture of a better tomorrow that can be achieved through their products (Mager and Katzenbach Citation2021).

In this study, imaginaries are perceived as both multiple and dynamic, signifying their continual negotiation among various parties. Furthermore, they are considered performative through their concrete repercussions on how the future of surveillance and privacy materialises on both the individual and societal levels. By analysing imaginaries, we can tap into the cultural-level understandings of how the future is set to unfold. The study of imaginaries is important because, through collective imagination, we can create alternatives to prevailing cultural understandings and established practices in society, thus constructing an idea of what is possible.

Data collection

The media plays a definitive role in social life, as individuals’ perceptions of issues are shaped by the way in which they are framed in the news media (Altheide and Schneider Citation2013). Media documents can be studied as representations of social meanings (Altheide and Schneider Citation2013), and accordingly, the media and popular culture play a focal role in producing, strengthening and stabilising imaginaries (Jasanoff Citation2015). For this reason, this study adopts media analysis as its methodological approach.

The first author collected the data with the help of Media Cloud (www.mediacloud.org), an open-source media research project with a searchable global database of over 60,000 news media sources. It is commonly used by researchers, especially in the communications and media studies fields, to understand the online media ecosystem (www.mediacloud.org). Its large database was one of the reasons that Media Cloud was chosen as a source for data collection. Additionally, the platform is easy to use and offers valuable insights and visualisations about the data.

The search was limited to media outlets in the United States and the United Kingdom because we wanted to collect English-language data and focus on larger outlets that are likely to also reach a wide audience elsewhere in the world. The data were gathered from an 18-month period starting in April 2021. This timeframe is anchored around the announcement made by Facebook in late 2021 regarding its rebranding and the unveiling of its new name, Meta. This rebranding announcement resulted in the metaverse becoming a buzzword in the media; therefore, this period was thought to offer a fruitful basis for data collection. However, it should be emphasised that, even though Meta has a strong presence within the data set, the metaverse is also discussed more generally, and the analysis does not focus on Meta’s metaverse per se. A diverse set of outlets was included, such as major general news publications as well as technology blogs and other opinion pieces written by journalists or experts discussing the metaverse. Examples include Forbes, Wired, Business Insider and Venture Beat. As this search already resulted in a large amount of data and included outlets with global reach, we decided not to extend the geographical coverage to other English-speaking countries. We also decided not to narrow the search to only certain outlets, as we wanted to include various ideological positionings to gain a more comprehensive picture of how the metaverse imaginaries are constructed.

The keywords used were “privacy” and “metaverse”, which could appear anywhere in the text. The initial search resulted in 1,012 unique stories, and as a first step, based on headlines, irrelevant stories that did not contain journalistic content, such as stock news, were dropped. After this, the articles were manually screened to find those relevant to the study’s topic. This meant finding articles that focused specifically on data issues, surveillance and privacy in the metaverse, rather than those only mentioning these issues in a single sentence. This elimination resulted in a final sample of 95 articles.
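To make the screening procedure concrete, the sketch below illustrates the keyword-matching and headline-based exclusion steps in Python. It assumes a hypothetical CSV export of Media Cloud search results with title and text columns; in the study itself, the relevance screening was carried out manually by reading the articles rather than with a script.

```python
import csv

# Hypothetical column names for an exported list of Media Cloud stories;
# the exclusion terms below are illustrative markers of non-journalistic content.
EXCLUDE_IN_HEADLINE = ("stock", "shares", "market cap")

def keep_story(row):
    """Keep stories that mention both keywords and pass a crude headline screen."""
    text = (row["title"] + " " + row["text"]).lower()
    has_keywords = "privacy" in text and "metaverse" in text
    headline_ok = not any(term in row["title"].lower() for term in EXCLUDE_IN_HEADLINE)
    return has_keywords and headline_ok

with open("mediacloud_export.csv", newline="", encoding="utf-8") as f:
    candidates = [row for row in csv.DictReader(f) if keep_story(row)]

# The remaining candidates would still require manual reading to judge whether
# surveillance and privacy are a substantive focus (final sample in the study: 95 articles).
print(len(candidates))
```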

Data analysis

In the first analysis phase, the textual data were read through and coded by the first author using Atlas.ti. In this inductive coding, the focus was on recognising the ways in which the metaverse was portrayed as either a threat or an opportunity for privacy, as well as on the privacy issues raised and the solutions proposed. In the second stage, both authors participated in deepening the data analysis with the help of the analytical concept of sociotechnical imaginaries (Jasanoff Citation2015; C. Taylor Citation2004).

The study’s focus was on privacy, but we wanted to explore it as part of the larger context of surveillance, which is why the literature on surveillance (Lyon Citation2019; Zuboff Citation2019) was also used to interpret the data. By drawing on this literature and the connected discussion on privacy as individual protection versus social value (Hull Citation2015; Regan Citation1995; Solove Citation2007), the analysis was narrowed down to recognising the ways in which the imaginaries construct perceptions of surveillance and privacy. More specifically, the final analysis delves into four different rhetorics through which the surveillance ideology is supported or challenged within the metaverse imaginaries. In line with this, emphasis was placed on how privacy is understood within these rhetorics and how this possibly plays into amplifying or contesting surveillance.

Findings

Rhetorics supporting the surveillance ideology

The metaverse imaginaries that provide support for the larger surveillance ideology are built upon two distinct rhetorics. The first is inevitability, within which the materialisation of the metaverse – and connected to it, increased surveillance – is considered inevitable. This inevitability is further strengthened by highlighting the agency of technology that overpowers individuals and results in feelings of helplessness. The second rhetoric is technological solutionism, which posits technology as a silver bullet for remedying issues related to surveillance. Within these rhetorics, privacy continues to be examined from an individual perspective, with a focus on questions of control and consent.

Inevitability

As Zuboff (Citation2019) points out, the rhetoric of inevitability is one of the main mechanisms upon which surveillance capitalism is built. In technological discussions, a universal agreement prevails that, in the future, everything will be “connected, knowable and actionable”. This agreement is taken for granted as an article of faith (Zuboff Citation2019, 223). According to Zuboff (Citation2019), within the tech community, the inevitability rhetoric has reached the stage of an actual ideology: inevitabilism. The notion of inevitability is built upon dramatic, even prophetic, framings of new disruptive technologies (Belk, Weijo, and Kozinets Citation2021, 32), the materialisation of which is often described as a new era, phase or stage (Zuboff Citation2019, 222). Similarly, within the analysed metaverse imaginaries, the rhetoric of inevitability is strong, as the manifestation of the metaverse and its technologies is depicted as unavoidable, using dramatic tones. The metaverse is believed to usher in a new era of the internet and social life in general – for better and for worse. The following quote demonstrates this:

While questions remain, what we do know is that it's coming. Arguably, it already exists. (…) We know it's going to be a place where people will interact with each other, where they will buy and sell goods and services, where communities will form around education, culture, entertainment, and faith, and where the traditional boundaries of personal data, property, and privacy will be thrown wide open. (Media article #44)

Here, the metaverse is cast as something that will profoundly transform our lives, including commerce, education and culture, and that will have serious implications for privacy by “throwing wide open” all boundaries related to it. The inevitability narrative is strong here because, while doubts about the specifics of the metaverse remain, the story claims that “we do know [emphasis added] it’s coming”.

As is common for the inevitability rhetoric, the idea that technological progress is necessarily followed by increased surveillance (Zuboff Citation2019) also holds true in this context. Thus, not only technological development but also surveillance is depicted as inevitable. This rhetoric draws from the current reality in which data and surveillance practices have become firmly embedded into the aspects of everyday social, cultural and political lives (Dencik Citation2018). These practices have become normalised, and this normalisation gains vigour through media framings of data extraction as serving the common good as well as providing safety and competitive advantage (Cohen Citation2013; Mulligan, Regan, and King Citation2020). The rhetoric of inevitability draws from these cultural understandings and further feeds into them. Indeed, intensified surveillance within the metaverse is rendered as inevitable, as ever-more intimate data can be gathered through, for example, the tracking of eye movements, posture, facial expressions and heart rate. The following quote illustrates this:

Metaverse platforms will be able to track where you go, what you do, where you look and how long your gaze lingers, your gait; they'll look at your posture and be able to infer your level of interest. They'll monitor your facial expressions, vocal inflections, vital signs, blood pressure, heart rate, blood flow patterns on your face. These extensive profiles will make the amount of information that the social media companies get seem like the good old days. (Media article #87)

This argument is driven home by contrasting metaverse surveillance with social media surveillance, which is the present-day reality and, as such, already causes anxiety. The notion that surveillance in the metaverse will expand into new, increasingly personal facets of our lives permeates the analysed data, making it difficult to envision a different reality.

This rhetoric is further strengthened by discussions of technology as having agency independent of humans. Numerous instances were found in which the (human) entities that are responsible for using the various technologies are not mentioned. Instead, within these visions, technologies act independently without humans having the ability to oppose or control them: “Metaverse might need to use more sensors around homes and offices” (Media article #57). Here, the metaverse is regarded as an autonomous agent, which, to function, needs to assert a strong presence in our everyday lives through surveillance technologies. From this perspective, humans are reduced to helpless creatures before the forces of technological development. This connects with the notion of consumers experiencing a loss of agency before manipulative technological systems (e.g. Darmody and Zwick Citation2020; Zuboff Citation2019). However, within some imaginaries, it is not just technologies but also the various commercial players utilising them that are responsible for creating these feelings of helplessness, as can be seen in the following:

Basically, we have at least four of the richest and most powerful companies in the history of the world racing to build a massive commercial surveillance-and-behavioral-control system they are sure we would not want to live beyond or without. They are boys selling toys, but they are deadly serious. And we just might fall for all of this, just as we have fallen for everything else they have sold us. (Media article #6)

The assertion here is that, as has happened before, consumers will be swayed by whatever the tech giants promote regarding the metaverse and will believe the stories they are told without questioning or resistance. The helplessness that follows has been explored in previous research through concepts such as digital resignation (Draper and Turow Citation2019) and high-fidelity consumption (Hoang, Cronin, and Skandalis Citation2022). Digital resignation refers to the feeling of being unable to contest surveillance practices (Draper and Turow Citation2019). Accordingly, Hoang, Cronin, and Skandalis (Citation2022) assert that, within the surveillance ecosystem, consumers’ subjectivity can be explored through fidelity rather than through autonomy and a sense of agency. By experiencing various collective affects, consumers feel locked into the existing status quo, which leads them to conform to its processes.

The rhetoric depicting individuals as helpless in the face of new technologies and the market perpetuates digital resignation and discourages both individuals and society from advocating for change. By painting technological development and surveillance as inevitable, these imaginaries construct the idea that the future is set to unfold in a certain way, which makes the imagining of other alternatives difficult, if not impossible (Dencik Citation2018).

Technological solutionism

Even when the materialisation of the metaverse is posited as inevitable, the overarching tone in many media stories is in fact very hopeful – the metaverse is regarded as a second chance to get things right and correct the mistakes of the current internet world in terms of data and privacy. This portrayal of the metaverse does not present it as the origin of more oppressive data practices; rather, it is depicted as a more optimistic alternative. For example:

Despite these challenges, or perhaps as a result of them, the metaverse presents an opportunity to be a breakthrough in privacy-compliant digital marketing. (Media article #38)

If we move fast, ask the right questions, and approach Web3 in an ethical way, we have a shot at making this next era of the Internet more compelling, and kinder than the last. (Media article #19)

This seemingly hopeful and optimistic rhetoric against the current data regime may, in effect, further support the surveillance ideology. By focusing on the future, these stories imply that nothing can be done in the present day – the game is already over, and the only way we can win the battle against invasive surveillance is to completely redo everything by “moving fast”. As there are no guarantees or consensus on when and how the metaverse will materialise, focusing efforts on imagining a better tomorrow may end up creating a passive and hopeless environment in the here and now.

Additionally, what becomes explicit is that these positive and hopeful futures can be achieved through technological development. This resonates with the ideas of technological solutionism (Morozov Citation2013), within which technological fixes are regarded as the simple and optimal means for solving complex societal issues. Technological solutionism constitutes a second rhetoric within metaverse imaginaries providing support for the surveillance ideology. It is intertwined with the rhetoric of inevitability, as the opportunities these technologies present are framed rather dramatically. For instance, in the quotes above, the metaverse creates a “breakthrough” and the “next era of the internet”, thus offering a revolutionary change that can lead to better data practices. This breakthrough is realised through technological solutions, such as blockchain and its associated NFTs, context-based targeting, VPNs and homomorphic encryption.

Overall, in the imaginaries, the focus on technology as a solution takes dominance over the larger discussion about what type of surveillance is acceptable and how it could be tackled through other means. In fact, the importance of regulation and legislation is also explored in the media stories, but mainly from a pessimistic perspective. The presumption is that creating regulation will be near impossible, and even though the metaverse is currently only just an idea, when it comes to regulation, we are already too late to really have a significant impact. This is illustrated in the following quote: “There is a scant chance that we are going to muster the political will to predict and limit the unintended consequences and concentration of power that are sure to follow this race to build and dominate a metaverse” (Media article #6).

Within the techno-solutionist rhetoric, there is believed to be a “scant chance” that any working regulation could be achieved; thus, technology itself must offer the solution. This rhetoric becomes explicit in the way privacy is discussed. Various technological tools are cast as a remedy for the intensified surveillance of the metaverse. Hence, within the imaginaries relying on the rhetoric of techno-solutionism, surveillance is not questioned per se; instead, the focus is on creating hype around new innovative devices and software that individuals can use to protect themselves. As noted by Dencik (Citation2018), viewing privacy protection through a technological response foists the responsibility of resisting data collection on the individual. What this means is that, in addition to promoting technology as the solution for privacy issues above any other measures, the idea of privacy as merely the protection against individual harm is maintained – thus ignoring the larger societal issues. This two-fold point is clearly demonstrated in the following quotes:

Similarly, individual users becoming a part of the metaverse should remain vigilant of the amount and type of information they share. Moreover, it is crucial that they deploy the use of online security tools that are designed to protect users from privacy invasions and data breaches. (Media article #27)

MetaGuard (…) aims to provide a tool for users to protect against intrusions on their personal data. It would allow users to add a variable level of “noise” to the information metaverse applications collect, making it harder to obtain accurate biometric data and make statistical inferences about identity. (Media article #75)

By utilising various “online security tools”, “individual users” are expected to “remain vigilant” and on top of the amount and type of data they share. Here, privacy is viewed merely through privacy invasions and data breaches, an idea which neglects the harm that goes beyond these. Although these technologies are portrayed as positive progress, developing new tools for users to protect themselves against surveillance does not challenge the data regime but instead accepts and supports it.
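As a rough illustration of the noise-adding approach described in the MetaGuard example above, the following sketch perturbs a single hypothetical biometric reading with user-controlled random noise before it is shared. This is a simplified reading of the idea rather than the tool’s actual implementation, and the function and parameter names are our own assumptions.

```python
import random

def add_noise(value: float, noise_level: float) -> float:
    """Return a perturbed telemetry reading.

    noise_level is a user-chosen fraction (e.g. 0.05 for +/-5%) controlling
    how much the true value is obscured before an application receives it.
    """
    offset = random.uniform(-noise_level, noise_level) * value
    return value + offset

# Example: a user's true height in metres, as a VR system might infer it.
true_height = 1.78
reported_height = add_noise(true_height, noise_level=0.05)
print(round(reported_height, 3))
```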

The ways in which individuals can protect themselves with the help of technologies are crystallised through the concepts of control and consent. The techno-solutionist rhetoric assumes that novel technological solutions allow for increased control and more informed consent. The connection between control and privacy from the individual perspective has also been previously discussed. For example, Freishtat and Sandlin (Citation2010, 516) examine how, within its own discourse, Facebook “reconceptualizes privacy within a rhetoric of control”. Indeed, the metaverse is depicted as a goldmine for new technological remedies that can transfer control back to the hands of the individual. For example, blockchain technologies create new ways for individuals to control what and how data are being collected, and they offer better protection against companies’ privacy-breaching actions:

Companies like Microsoft are working on standards now. The concept of decentralized identity has been raised, or self-sovereign identity, where people control their identity through blockchain technology and limit what they disclose to whom, in what level of detail. That would take the control away from platforms and keep consumers safer. (Media article #26)

Here, control is seen as achieved through a "decentralized" or "self-sovereign identity", allowing consumers to gain an upper hand against platform providers. However, protection through these technological means depends on individuals’ skills and abilities to engage in these privacy-enhancing practices (Dencik Citation2018; Lehtiniemi Citation2017). In the metaverse, these security strategies will likely grow increasingly intricate, and it cannot be presumed that everyone is able, or inclined, to incorporate, for instance, blockchain into their daily routines. Additionally, as pointed out above, even if under the control of individuals, the metaverse would likely lead to an intensified level of data harvesting. As control is tied to the use of various technological solutions, were these solutions to fail or not be used correctly, even larger amounts of ever-more intimate data would end up in the hands of third parties (Lehtiniemi Citation2017).

Control is strongly connected to questions of consent. The many issues related to consent as a basis for privacy protection, such as information asymmetries and the fact that not giving consent is not actually possible (Hull Citation2015; Mulligan, Regan, and King Citation2020), have already been discussed in this paper. However, in the data, consent is still regarded as one of the main solutions for privacy dilemmas: “As the metaverse is developed, this same technology that allows for secure NFT wallets could normalize data wallets – such that consumers would be able to fully consent to the bits of data they want to trade for experiences, collectibles or other assets.” (Media article #38). The metaverse is imagined as a place where consumers are – yet again, with the help of novel technologies – able to provide more specific consent on what data can be collected and how it can be used. However, the increased amount of intimate data collected during the immersive metaverse experience, combined with a lack of understanding of how multiple data traces can be conjoined, could lead to consumers having increasing difficulties understanding what they are in fact consenting to. Additionally, affording consumers the opportunity to opt out does not guarantee immunity against subtle influences nudging them into specific directions. Virtual environments can be built in a way that allows for behaviour modification without actually limiting options – and without consumers becoming aware of this (Lehtiniemi Citation2017; Yeung Citation2017).

To conclude, the rhetoric of technological solutionism maintains the view of privacy as individual protection, emphasising that control and consent are achievable through technological means. This focus diverts attention from surveillance at the collective level, which can only be addressed through extensive coordinated efforts that extend beyond technological aids. Therefore, the explicit presence of this rhetoric in metaverse imaginaries highlights the challenge of confronting and resisting surveillance at a fundamental societal level and of envisioning alternatives to it (Dencik Citation2018).

Rhetorics challenging the surveillance ideology

The surveillance ideology is challenged through two main rhetorics. The first is decentralisation. The idea of decentralisation is that power and responsibility become more equally divided, often through technological means. This, in turn, leads to the empowerment of consumers through the monetisation of their own data. The second rhetoric revolves around collaboration between parties and emphasises that tackling the adverse impacts of surveillance is not solely the responsibility of individuals but also, for example, the responsibility of governments, academia and market actors. However, even within these rhetorics, privacy is still largely seen through the individual lens.

Decentralisation

Decentralisation is a recurring theme in the data. The idea behind decentralisation is that, with the help of technological innovation, a new, decentralised internet and metaverse can be created. Decentralisation implies that no single entity is in charge, but instead, the metaverse is created, owned and controlled decentrally, giving individuals power over platform providers. Moreover, this rhetoric builds on the idea that the decentralised creation of the metaverse would foster increased competition in the market, potentially compelling companies to prioritise privacy-centric practices. Thus, the rhetoric of decentralisation provides a hopeful reimagination of the online landscape.

One could argue that decentralisation is just another version of techno-solutionism (Morozov Citation2013), which, by focusing on a narrow technological solution against surveillance, does not actually question it on a fundamental level. However, it could also be regarded as a step towards an alternative vision, in which individuals are freed from the oppressive practices of the current technology giants that maintain the systems of surveillance. The metaverse is seen as the new era of the online world, created by the people for the people. This comes with the assumption that, if individuals are in charge, invasive surveillance will not take place. The idea of decentralisation is crystallised in the following quotes:

Basically, web2 is surfing on the web. It uses centralized systems like a top-level domain that is controlled by a company. There is no privacy because to use the products, your data is extracted and sold. Web3 is surfing inside the web. You have nodes that are operated by thousands of people, so it’s more secure and less centralized. Your data is yours. Your privacy is yours. (Media article #94)

On the path to decentralization, blockchains provide authentication and verification of digital transactions, building trust in the process. This will ultimately change the dynamic between brands and consumers. (Media article #93)

Through technologies such as blockchain, the decentralised metaverse is regarded as a means of giving individuals back their data and privacy, the ownership of which has been lost in the current internet environment controlled by large tech corporations. Decentralisation serves to mitigate the power asymmetry between those collecting data and the individuals who are the source of that data. This is assumed to result in a more secure environment that consumers can trust.

Consumer empowerment is inherently embedded within the rhetoric of decentralisation. Compared to the view of privacy as merely reactive protection against surveillance practices, here, it is regarded more proactively as a source of empowerment. One significant route to achieving this empowerment is by enabling consumers to monetise their own data. By shifting who controls data, money and power would be divided more equally in society, as companies would not be the only ones to monetarily benefit from consumers’ data. This idea is illustrated in the following quotes:

Privacy, identity, preferences, history, insights, and access become currencies. (Media article #93)

Data is money, and money is power, so finding ways to give people ownership of their own data could be one of the great equalizers of the twenty-first century. (Media article #83)

Consumer empowerment through decentralisation is depicted as having the potential to be “one of the great equalizers of the twenty-first century”. In this way, the rhetoric challenges the oppressive data structures by diminishing the power of tech giants, thus resulting in a more equal society in general. However, imaginaries that rely on this rhetoric maintain a rather narrow view of privacy as a matter of individual decision-making and as a commodity. Furthermore, this view ignores the fact that data monetisation can, in effect, create more inequality, as discussed by Lehtiniemi and Ruckenstein (Citation2019). For instance, only the wealthy with technical capabilities and financial literacy are able to protect their data, while others are forced to trade their data for access to services.

Overall, the assumptions underlying the decentralisation rhetoric fail to consider some significant aspects. First, while this internet created by the people for the people and its empowered consumers are hyped, little consideration is given to the practical governance of these virtual realms. Even if the big corporate players were not in charge, this would not necessarily mean that users and their privacy would be safe or that no surveillance would occur. As Lyon (Citation2019) highlights, surveillance can be understood through a culture of surveillance, within which individuals are active participants and themselves engage in the surveillance of others. Second, within this rhetoric, surveillance systems are portrayed as created and maintained only by tech giants, such as Meta. However, what is not discussed is that, even though companies such as Meta play an important role, they are not the only ones collecting and using consumer data. Thus, the abolishment of the current big tech companies would not necessarily go hand in hand with the end of surveillance.

In short, similar to what Lehtiniemi (Citation2017) has found in relation to personal data spaces, the commercial use and monetisation of data, which forms the core of surveillance capitalism, is not fully disputed within the metaverse imaginaries, even if they discuss topics such as decentralisation and consumer empowerment. This is because the imaginaries take as a given the fact that, in the metaverse, large amounts of personal data will be harvested and that this harvesting will only intensify. Hence, following the thinking of Zuboff (Citation2019), human experience is still treated as raw material for prediction products.

Collaboration

The second rhetoric through which the surveillance ideology is challenged is collaboration. While most of the conversation within the metaverse imaginaries centres on technology and is sceptical of the feasibility or effectiveness of regulation, there are still hints of an alternative mindset emerging. For example, some commentators point out that, to find working solutions against increased surveillance in the metaverse, we need cooperation between several parties. Within the rhetoric of collaboration, solving issues related to privacy is not just the responsibility of the individual, and technology is not rendered as the only solution. Examples of this can be seen in the following:

Importantly, government, academia, industry, and civil society decision-makers all must collaborate and deliberate on the various issues concerned with security in this emerging technology. (Media article #32)

But I think we have a duty to sort of resist it [the metaverse] and move toward the more privacy preserving technologies that are going to allow people to talk without worries of self-censorship or censorship from the Big Tech companies coming down upon them. (Media article #9)

Here, the various entities that are needed to create change are acknowledged. Thus, this rhetoric recognises that surveillance is built into the structures of society, and therefore, to challenge surveillance practices, larger coordinated actions are needed. Through this collaboration, collective-level resistance can take shape, challenging the current power imbalances. As these quotes illustrate, humans are portrayed as active actors in contrast to technology’s agency, for example, through the use of the pronoun “we”.

Indeed, collaboration between actors and across sectors for combining “economic, social, cultural, ecological and technological dimensions” (Dencik Citation2018, 40) is needed to tackle the societal-level issues related to surveillance (e.g. Dencik Citation2018; Mulligan, Regan, and King Citation2020). These subtle traces of divergent thinking lay the foundation for counter-imaginaries, within which an alternative to the metaverse and intensified surveillance is seen as a real possibility. The prevailing idea that we will just accept the coming of the metaverse as it is and focus our efforts solely on finding ways to cope with its negative consequences is challenged. Thus, the inevitability of the metaverse and surveillance is questioned.

Collaboration also encompasses involvement from companies. The imaginaries relying on the collaboration rhetoric highlight the role that companies play in creating alternatives for the current data regime. Brands entering the metaverse are expected to take a more active role against the oppressive practices of the biggest corporations, establish strict policies on how they use the collected data and, in this way, build trust with their customers. One way in which this can be done is by including privacy in their products from the start:

Safety-by-design, fairness-by-design, and privacy-by-design are essential to digital sustainability. Platforms need to deploy up-to-date technology that proactively enforces safe and welcoming behavior outlined in the platform’s community guidelines. (Media article #24)

Here, companies are entrusted with more responsibility to create safer environments. Building products and services with privacy at their core requires businesses to take a more active and proactive approach to privacy and surveillance. If products are built on a privacy-by-design approach, with default settings ensuring a more secure virtual environment, this lightens the burden on individuals to have the skills and knowledge to protect themselves. The responsibility for safeguarding their privacy does not rest on their shoulders; rather, it is managed by the service providers.

However, even within this rhetoric, which attempts to challenge the prevailing data systems by advocating for increased collaboration and greater corporate responsibility, privacy is still largely understood as linked to the individual. Although the imaginaries build an understanding that surveillance in the metaverse poses broad threats to society and, thus, various parties need to come together to create resistance and build alternatives, when discussing privacy, they still place the ultimate responsibility for change on consumers. The following quote illustrates this:

Consequently, the question will be whether national regulators and governments are well equipped and prepared to deal with the above mentioned concerns (…) But more importantly, the question begs as to how data subjects i.e. individual users of Meta will demand their privacy and personal information to be protected. (Media article #34)

In addition to acknowledging regulators and governments, what is seen as most important is that data subjects "demand their privacy and personal information to be protected". To sum up, the rhetorics of both decentralisation and collaboration build on the idea of dividing power more equally and activating various parties within society to take part in the battle against amplified surveillance and the erosion of privacy. In this way, a future with fairer data practices can be imagined. Nevertheless, despite efforts to contest surveillance structures, the rhetorics often succumb to familiar pitfalls by approaching privacy primarily from an individual standpoint and by emphasising specific technologies, such as blockchain, as the principal tools against surveillance.

Discussion

Sociocultural understandings of privacy

Through the construction of imaginaries, the future becomes an important factor in the present day, impacting actors on both the micro and macro levels (Borup et al. Citation2006). In this study, we explored how metaverse imaginaries are constructed within media content by focusing on how various rhetorics amplify or contest the surveillance ideology and, at the same time, constitute understandings of privacy.

By circulating in the media, metaverse imaginaries take part in co-constructing reality and impact individual consumers. The cultural-level imaginaries regarding surveillance become transformed into personal imaginaries as consumers envision their place and role as part of the surveillance culture (Lyon Citation2019). These personal imaginaries then frame consumers’ expectations towards the metaverse, technology and privacy, as well as towards the practices in which they engage. Hence, theoretically, this study extends consumer privacy research by approaching the concept from a sociocultural perspective and, thus, sheds light on the ways in which privacy is given meaning on the sociocultural level as part of the larger surveillance ideology. Recent scholarship (Horppu Citation2023; Sörum and Fuentes Citation2023) has noted that research on consumers’ experiences of datafication has largely focused on individual psychological processes and cost-benefit calculations. Such research tends to overlook the sociocultural meanings, narratives and imaginations that shape the realities in which interactions with data collection practices take place. If we, for instance, look at the privacy paradox – the gap between attitude and behaviour in the context of privacy (Norberg, Horne, and Horne Citation2007) – both attitude and behaviour are impacted by individuals’ conceptions of what is possible, which, in turn, stem from the cultural-level imaginaries and meanings regarding surveillance and privacy.

Our findings reveal a tendency to discuss privacy from the perspective of the individual, advocating for its safeguarding through protective technologies. Here, our findings align with previous research highlighting a prevalent inclination to address privacy concerns primarily through individual protection, as observed in Facebook’s promotional content (Egliston and Carter Citation2022) and the elite discourse surrounding Meta’s metaverse (Lucia, Vetter, and Adubofour Citation2023). Marketing scholarship’s emphasis on drawing from psychology to study consumer privacy through concepts such as the privacy calculus or the privacy paradox (Martin and Murphy Citation2017; Smith, Dinev, and Xu Citation2011) further strengthens the individualistic perspective on the construct. The very concept of consumer privacy points to privacy as an individual, commodified good instead of a collective one. To draw more attention to the collective level of privacy, this paper adds to the scholarship on group privacy (L. Taylor, Floridi, and van der Sloot Citation2017) and its social value (Regan Citation1995; Solove Citation2007) by highlighting how discourse that approaches privacy through self-management can be problematic for society at large.

Reducing privacy to an individualised condition can be traced back to the larger neoliberal ideology, in which governmental regulation of businesses is viewed unfavourably, while corporate self-governance is upheld as the ideal (Zuboff Citation2019). Within this market system of few limitations, consumers are expected to exercise their freedom of choice and to wield significant influence over market operations through their decisions (Carrington, Zwick, and Neville Citation2016; Dixon Citation1992). Drawing from this ideology, privacy self-management works as another example of consumer responsibilisation (Giesler and Veresiu Citation2014), which highlights sovereign consumers who have the ability and obligation to initiate change and thus lead the way for other market actors to follow (Carrington, Zwick, and Neville Citation2016).

Our findings show that, despite the emphasis on collaboration among parties, the burden of addressing privacy issues ultimately falls on the individual, thereby diminishing the role of other market actors. Through their own actions, such as utilising tools to encrypt or obfuscate their data, consumers are able to make changes within the system of surveillance but not to change the system itself, as this would require wider challenging of the existing political and economic structures (Carrington, Zwick, and Neville Citation2016). As has been shown in the context of anti-consumption and digital detoxing, such actions can be regarded as gestural acts, which relieve individuals’ own agonies rather than genuinely resist or transform the system (Hoang, Cronin, and Skandalis Citation2023). Thus, the idea of privacy as the responsibility of the individual suppresses a more systemic critique of surveillance capitalism.

This is linked to a lack of collective imagination regarding alternatives to the current surveillance ecosystem. Even the imaginaries that depict the metaverse as the next revolutionary step towards more privacy and consumer empowerment often struggle to relinquish certain assumptions, such as the belief that technology provides the most effective solution against surveillance-related harms. Hence, envisioning happens within a set system, the terms of which have been set by the market (Preece, Whittaker, and Janes Citation2022) and within which even our hopes for a better future are made to fit. Indeed, marketers aspire to construct and sustain a vision in which surveillance, coupled with sophisticated automated decision-making and nudging techniques, culminates in consumer autonomy and empowerment (Darmody and Zwick Citation2020).

In this way, our findings connect with the notion of surveillance realism (Dencik Citation2018), which draws from Fisher’s (Citation2009) capitalist realism and refers to the difficulty of expanding the limits of our imagination to envision alternatives outside the current datafication paradigm. As it currently stands, we, as a society, are unable to imagine an alternative to a virtual future within which individuals’ data are extracted in an intensifying manner for commercial purposes. Within the prevailing neoliberal environment, we find ourselves in a state of futurelessness, feeling closed in within the system, with affective atmospheres built around pessimistic views of the future (Ahlberg, Hietanen, and Soila Citation2021; Hoang, Cronin, and Skandalis Citation2022; Citation2023).

On the individual level, this inability to imagine alternative futures, coupled with a sense of being closed in, can further strengthen feelings of helplessness (Andrejevic Citation2014) and result in digital resignation (Draper and Turow Citation2019). Hence, consumers might be aware of the problems of surveillance systems but, at the same time, also aware of their own powerlessness regarding these issues (Fisher Citation2009; Hoang, Cronin, and Skandalis Citation2023). Moreover, even when the future of privacy is discussed in a more hopeful and positive light, much responsibility is still placed on consumers to stay on top of the newest technologies. If this technological savviness is missing, the visions might further feed into feelings of helplessness. Therefore, it is not as straightforward as saying that individuals are merely indifferent to surveillance practices. Instead, the normalisation of these practices through public narratives and everyday affective experiences, combined with their complexity and ambiguity, makes them difficult for individuals to process. As a result, consumers might find it easier not to consciously think about the ways in which these systems operate and instead to suppress the affects that might emerge (Ellis Citation2020).

Reimagining the inevitable

Looking beyond the individual consumer, the ways in which these imaginaries are constructed have an impact on how, on the societal level, the metaverse or other future technologies will be welcomed. The imaginaries can hinder the gathering of political will to rethink the mechanisms of data capitalism. However, as imaginaries are culturally and collectively constructed, they can also be challenged and reconstructed (Fisher Citation2009). This further connects to the culture of surveillance, in which consumers are regarded not as mere passive subjects but as active agents in shaping and maintaining culture (Lyon Citation2019). Surveillance culture emerges through the practices and imaginaries of individuals, which suggests that they also have the power to challenge it. The first step is rethinking the inevitability narrative. Drawing from Latour (Citation2020), Mulligan, Regan, and King (Citation2020) point out how the COVID-19 pandemic showed that even rather quick and drastic change is possible, meaning that digital infrastructures, systems and markets are more flexible than they are made out to be. Thus, the inevitability of the metaverse and its utilisation for more surveillance could be reimagined.

In fact, since the data for this paper were collected, the inevitability of the metaverse has come under scrutiny. For instance, Meta has faced significant difficulties in developing the necessary technology and attracting users to its metaverse, resulting in substantial investment losses. As a result, the most intense hype in public discussion has waned, giving rise to more apprehensive visions regarding the future of the metaverse. At the time of writing, however, the Apple Vision Pro headset is the topic attracting attention in both traditional and social media. The next big thing in technology will always emerge; the question is whether it will be accompanied by the same assumptions regarding surveillance and privacy.

By reconceptualising the core assumptions of the data paradigm, we can begin to perceive privacy from a more collective perspective and acknowledge the detrimental effects that surveillance can have on society’s fundamental values, including fairness, security and autonomy. This could be achieved by combining the agendas of those interested in issues of technology and those interested in social justice, which have so far remained separate (Dencik, Hintz, and Cable Citation2016). Through collaboration between various parties, a reimagination of data practices is possible, using human rights and economic and social justice as a starting point (Dencik Citation2018; Mulligan, Regan, and King Citation2020). By building a new system from the ground up around a more equal division of power, consumer control and empowerment can start to have real meaning. Thus, questions of individual privacy protection do not become obsolete; rather, they are assigned a new role as part of a bigger paradigm shift.

Conclusion

This paper contributes to privacy research, first, by shedding light on how privacy is given meaning on the sociocultural level. Consumers’ privacy attitudes, beliefs and behaviours are not born in a vacuum but instead derive from the prevailing social imaginaries and cultural meanings. Using the metaverse as an illustrative example, this study unpacks how these imaginaries are constructed in the media through the rhetorics of inevitability, technological solutionism, decentralisation and collaboration. Second, this paper adds to the stream of literature that problematises the framing of privacy protection through, and as the responsibility of, the individual. We further argue for the need to extend the perspective to the larger societal harms of surveillance, which can only be tackled through wider collaboration between parties, not through protective technologies. Third, and connected to the previous points, this paper shows how understandings of privacy are connected to the larger ideology of surveillance. The relationship is reciprocal: the way the discussion around surveillance is constructed sustains the narrow, individual view of privacy, and this view, in turn, further supports the surveillance ideology.

While this study focused on the ways in which social imaginaries are constructed in the media, future research could explore how the metaverse is discussed elsewhere, such as in more official policy documents. The data for this study were collected following Meta’s metaverse announcement, during a time when there was much buzz around the concept. It would be intriguing to explore how discussions surrounding the metaverse have evolved now that the most intense hype has subsided, particularly in sources less prone to engaging in initial hype cycles. Are they able to take a more critical outlook regarding surveillance systems, and are privacy harms approached from a more collective and societal perspective? Indeed, the rather limited focus on news media and specific geographical areas can be regarded as the main limitation of this study. In our analysis, we have not taken into account, for example, the political partisanship of certain (especially U.S.) media outlets, as this was not the focus of our study. More varied data or an exploration of the construction of metaverse imaginaries in other cultural contexts could generate intriguing insights.

In addition, more research still needs to be conducted to understand how, based on these cultural-level visions, consumers make sense of the metaverse, surveillance and privacy within their daily lives. Although inquiries into the datafied consumer experiences from an emic perspective have started to emerge (e.g. Lupton Citation2020; Sörum and Fuentes Citation2023), further research is still needed to understand privacy from the consumer’s perspective in the constantly connected realities of both the present and the future.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Johanna Horppu

Johanna Horppu is a doctoral researcher at Tampere University, Finland. Her research explores the sociocultural and sociomaterial dimensions of privacy and consumer data with a focus on the interconnections between consumers, technology, imaginaries and affects.

Elina Närvänen

Elina Närvänen is a professor of services and retailing at Tampere University, Finland. Her current research interests include sustainable consumption and the circular economy, sociomaterial approaches to consumption and qualitative research methodologies.

References

  • Ahlberg, O., J. Hietanen, and T. Soila. 2021. “The Haunting Specter of Retro Consumption.” Marketing Theory 21 (2): 157–175. https://doi.org/10.1177/1470593120966700
  • Altheide, D., and C. Schneider. 2013. Qualitative Media Analysis. London: SAGE Publications Ltd.
  • Andrejevic, M. 2014. “Big Data, Big Questions: The Big Data Divide.” International Journal of Communication 8: 1673–1689.
  • Andrejevic, M. 2017. “Digital Citizenship and Surveillance: To Pre-Empt a Thief.” International Journal of Communication 11: 879–896.
  • Barocas, S., and K. Levy. 2019. “Privacy Dependencies.” Washington Law Review 95 (2): 555–616.
  • Barrera, K. G., and D. Shah. 2023. “Marketing in the Metaverse: Conceptual Understanding, Framework, and Research Agenda.” Journal of Business Research 155: 113420. https://doi.org/10.1016/j.jbusres.2022.113420
  • Barth, S., M. D. de Jong, M. Junger, P. H. Hartel, and J. C. Roppelt. 2019. “Putting The Privacy Paradox to the Test: Online Privacy and Security Behaviors among Users with Technical Knowledge, Privacy Awareness, and Financial Resources.” Telematics and Informatics 41: 55–69. https://doi.org/10.1016/j.tele.2019.03.003
  • Beer, D. 2019. The Data Gaze: Capitalism, Power and Perception. London: SAGE Publications Ltd.
  • Beke, F. T., F. Eggers, P. C. Verhoef, and J. E. Wieringa. 2022. “Consumers’ Privacy Calculus: The PRICAL Index Development and Validation.” International Journal of Research in Marketing 39 (1): 20–41. https://doi.org/10.1016/j.ijresmar.2021.05.005
  • Belk, R., H. Weijo, and R. V. Kozinets. 2021. “Enchantment and Perpetual Desire: Theorizing Disenchanted Enchantment and Technology Adoption.” Marketing Theory 21 (1): 25–52. https://doi.org/10.1177/1470593120961461
  • Bettany, S. M., and B. Kerrane. 2016. “The Socio-Materiality of Parental Style: Negotiating the Multiple Affordances of Parenting and Child Welfare within the New Child Surveillance Technology Market.” European Journal of Marketing 50 (11): 2041–2066. https://doi.org/10.1108/EJM-07-2015-0437
  • Bleier, A., and M. Eisenbeiss. 2015. “The Importance of Trust for Personalized Online Advertising.” Journal of Retailing 91 (3): 390–409. https://doi.org/10.1016/j.jretai.2015.04.001
  • Borup, M., N. Brown, K. Konrad, and H. van Lente. 2006. “The Sociology of Expectations in Science and Technology.” Technology Analysis & Strategic Management 18 (3–4): 285–298. https://doi.org/10.1080/09537320600777002
  • Carmi, E. 2021. “A Feminist Critique to Digital Consent.” Seminar.Net 17 (2). https://doi.org/10.7577/seminar.4291
  • Carrington, M. J., D. Zwick, and B. Neville. 2016. “The Ideology of the Ethical Consumption gap.” Marketing Theory 16 (1): 21–38. https://doi.org/10.1177/1470593115595674
  • Cloarec, J. 2020. “The Personalization–Privacy Paradox in the Attention Economy.” Technological Forecasting and Social Change 161: 120299. https://doi.org/10.1016/j.techfore.2020.120299
  • Cohen, J. 2013. “What Privacy Is For.” Harvard Law Review 126 (7): 1904–1933.
  • Coll, S. 2014. “Power, Knowledge, and the Subjects of Privacy: Understanding Privacy as the Ally of Surveillance.” Information, Communication & Society 17 (10): 1250–1263. https://doi.org/10.1080/1369118X.2014.918636
  • Culnan, M. J., and P. K. Armstrong. 1999. “Information Privacy Concerns, Procedural Fairness and Impersonal Trust: An Empirical Investigation.” Organization Science 10 (1): 104–115. https://doi.org/10.1287/orsc.10.1.104
  • Darmody, A., and D. Zwick. 2020. “Manipulate to Empower: Hyper-Relevance and the Contradictions of Marketing in the Age of Surveillance Capitalism.” Big Data & Society 7 (1): 205395172090411. https://doi.org/10.1177/2053951720904112
  • Dencik, L. 2018. “Surveillance Realism and the Politics of Imagination: Is There No Alternative?” Krisis: Journal for Contemporary Philosophy 1: 31–43. https://doi.org/10.21827/krisis.38.1.38829
  • Dencik, L., A. Hintz, and J. Cable. 2016. “Towards Data Justice? The Ambiguity of Anti-Surveillance Resistance in Political Activism.” Big Data & Society 3 (2): 205395171667967. https://doi.org/10.1177/2053951716679678.
  • Dixon, D. F. 1992. “Consumer Sovereignty, Democracy, and the Marketing Concept: A Macromarketing Perspective.” Canadian Journal of Administrative Sciences 9 (2): 116–125. https://doi.org/10.1111/j.1936-4490.1992.tb00585.x
  • Draper, N. A., and J. Turow. 2019. “The Corporate Cultivation of Digital Resignation.” New Media & Society 21 (8): 1824–1839. https://doi.org/10.1177/1461444819833331
  • Dwivedi, Y. K., L. Hughes, Y. Wang, A. A. Alalwan, S. J. Ahn, J. Balakrishnan, S. Barta, and J. Wirtz. 2023. “Metaverse Marketing: How the Metaverse Will Shape the Future of Consumer Research and Practice.” Psychology & Marketing 40 (4): 750–776. https://doi.org/10.1002/mar.21767
  • Egliston, B., and M. Carter. 2022. “Oculus Imaginaries: The Promises and Perils of Facebook’s Virtual Reality.” New Media & Society 24 (1): 70–89. https://doi.org/10.1177/1461444820960411
  • Ellis, D. 2020. “Techno-Securitisation of Everyday Life and Cultures of Surveillance-Apatheia.” Science as Culture 29 (1): 11–29. https://doi.org/10.1080/09505431.2018.1561660
  • Fisher, M. 2009. Capitalist Realism: Is There No Alternative? Hants, UK: Zero Books.
  • Floridi, L. 2017. “Group Privacy: A Defence and Interpretation.” In Group Privacy: New Challenges of Data Technologies, edited by L. Taylor, L. Floridi, and B. van der Sloot, 103–123. Dordrecht: Springer.
  • Fourcade, M., and K. Healy. 2017. “Seeing Like a Market.” Socio-economic Review 15 (1): 9–29. https://doi.org/10.1093/ser/mww033
  • Freishtat, R. L., and J. A. Sandlin. 2010. “Shaping Youth Discourse About Technology: Technological Colonization, Manifest Destiny, and the Frontier Myth in Facebook’s Public Pedagogy.” Educational Studies 46 (5): 503–523. https://doi.org/10.1080/00131946.2010.510408
  • Giesler, M., and E. Veresiu. 2014. “Creating the Responsible Consumer: Moralistic Governance Regimes and Consumer Subjectivity.” Journal of Consumer Research 41 (3): 840–857. https://doi.org/10.1086/677842
  • Hoang, Q., J. Cronin, and A. Skandalis. 2022. “High-fidelity Consumption and the Claustropolitan Structure of Feeling.” Marketing Theory 22 (1): 85–104. https://doi.org/10.1177/14705931211062637
  • Hoang, Q., J. Cronin, and A. Skandalis. 2023. “Futureless Vicissitudes: Gestural Anti-consumption and the Reflexively Impotent (Anti-)consumer.” Marketing Theory 23 (4): 585–606. https://doi.org/10.1177/14705931231153193
  • Horppu, J. 2023. “Sensing Privacy: Extending Consumer Privacy Research Through a Consumer Culture Theory Approach.” Marketing Theory 23 (4): 661–684. https://doi.org/10.1177/14705931231175698
  • Hull, G. 2015. “Successful Failure: What Foucault Can Teach Us about Privacy Self-Management in a World of Facebook and Big Data.” Ethics and Information Technology 17 (2): 89–101. https://doi.org/10.1007/s10676-015-9363-z
  • Jasanoff, S. 2015. “Future Imperfect: Science, Technology and the Imaginations of Modernity.” In Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power, edited by S. Jasanoff and S.-H. Kim, 1–33. Chicago/London: University of Chicago Press.
  • Latour, B. 2020. ‘What Protective Measures Can You Think of so We Don’t Go Back to the Pre-Crisis Production Model’. Accessed 2 September 2023. http://www.bruno-latour.fr/node/853.html.
  • Lehtiniemi, T. 2017. “Personal Data Spaces: An Intervention in Surveillance Capitalism?” Surveillance & Society 15 (5): 626–639. https://doi.org/10.24908/ss.v15i5.6424
  • Lehtiniemi, T., and M. Ruckenstein. 2019. “The Social Imaginaries of Data Activism.” Big Data & Society 6 (1): 205395171882114. https://doi.org/10.1177/2053951718821146
  • Lucia, B., M. A. Vetter, and I. K. Adubofour. 2023. “Behold the Metaverse: Facebook’s Meta Imaginary and the Circulation of Elite Discourse.” New Media & Society, 14614448231184248. https://doi.org/10.1177/14614448231184249.
  • Lupton, D. 2020. “‘Not the Real Me’: Social Imaginaries of Personal Data Profiling.” Cultural Sociology 15 (1): 3–21. https://doi.org/10.1177/1749975520939779
  • Lyon, D. 2019. “Surveillance Capitalism, Surveillance Culture and Data Politics.” In Data Politics: Worlds, Subjects, Rights, edited by D. Bigo, E. Isin, and E. Ruppert, 64–77. New York: Taylor & Francis.
  • Mager, A., and C. Katzenbach. 2021. “Future Imaginaries in the Making and Governing of Digital Technology: Multiple, Contested, Commodified.” New Media & Society 23 (2): 223–236. https://doi.org/10.1177/1461444820929321
  • Martin, K. D., and P. E. Murphy. 2017. “The Role of Data Privacy in Marketing.” Journal of the Academy of Marketing Science 45 (2): 135–155. https://doi.org/10.1007/s11747-016-0495-4
  • Mayer-Schönberger, V., and K. Cukier. 2013. Big Data: A Revolution that Will Transform How We Live, Work, and Think. London: John Murray.
  • McKinsey & Company. 2022. “Marketing in the Metaverse: An Opportunity for Innovation and Experimentation.” 24 May 2022. Accessed December 3, 2022. https://www.mckinsey.com/business-functions/growth-marketing-and-sales/our-insights/marketing-in-the-metaverse-an-opportunity-for-innovation-and-experimentation.
  • Morozov, E. 2013. To Save Everything, Click Here: The Folly of Technological Solutionism. New York: Public Affairs.
  • Mulligan, D. K., P. M. Regan, and J. King. 2020. “The Fertile Dark Matter of Privacy Takes on the Dark Patterns of Surveillance.” Journal of Consumer Psychology 30 (4): 767–773. https://doi.org/10.1002/jcpy.1190
  • Norberg, P. A., D. R. Horne, and D. A. Horne. 2007. “The Privacy Paradox: Personal Information Disclosure Intentions Versus Behaviors.” Journal of Consumer Affairs 41 (1): 100–126. https://doi.org/10.1111/j.1745-6606.2006.00070.x
  • Obar, J. A., and A. Oeldorf-Hirsch. 2018. “The Clickwrap: A Political Economic Mechanism for Manufacturing Consent on Social Media.” Social Media + Society 4 (3): 2056305118784770. https://doi.org/10.1177/2056305118784770
  • Preece, C., L. Whittaker, and S. Janes. 2022. “Choose Your Own Future: The Sociotechnical Imaginaries of Virtual Reality.” Journal of Marketing Management 38 (15–16): 1777–1795. https://doi.org/10.1080/0267257X.2022.2112610
  • Regan, P. M. 1995. Legislating Privacy: Technology, Social Values, and Public Policy. Chapel Hill: University of North Carolina Press.
  • Roessler, B., and J. DeCew. 2023. “Privacy.” In The Stanford Encyclopedia of Philosophy (Winter 2023 Edition), edited by Edward N. Zalta, and Uri Nodelman. https://plato.stanford.edu/archives/win2023/entries/privacy/.
  • Sadowski, J. 2019. “When Data is Capital: Datafication, Accumulation, and Extraction.” Big Data & Society 6 (1): 205395171882054. https://doi.org/10.1177/2053951718820549
  • Scarpi, D., G. Pizzi, and S. Matta. 2022. “Digital Technologies and Privacy: State of the Art and Research Directions.” Psychology & Marketing 39 (9): 1687–1697. https://doi.org/10.1002/mar.21692
  • Smith, J. H., T. Dinev, and H. Xu. 2011. “Information Privacy Research: An Interdisciplinary Review.” MIS Quarterly 35 (4): 989–1015. https://doi.org/10.2307/41409970
  • Solove, D. J. 2007. “‘I’ve Got Nothing to Hide’ and Other Misunderstandings of Privacy.” The San Diego Law Review 44 (4): 745–772.
  • Sörum, N., and C. Fuentes. 2023. “How Sociotechnical Imaginaries Shape Consumers’ Experiences of and Responses to Commercial Data Collection Practices.” Consumption Markets & Culture 26 (1): 24–46. https://doi.org/10.1080/10253866.2022.2124977
  • Statista. 2023a. “Share of Internet Users Worldwide Who Would Like To Do More To Protect Their Digital Privacy as of January 2023, by Country” [Graph]. Accessed February 9, 2024. https://www.statista.com/statistics/1122408/internet-users-worldwide-looking-better-ways-protect-privacy/.
  • Statista. 2023b. “Share of Internet Users Who Are Willing to Accept Online Privacy Risks for Convenience as of January 2023, by Country” [Graph]. Accessed February 9, 2024. https://www.statista.com/statistics/1023952/global-privacy-risks-accept-convenience-convenience.
  • Stewart, D. W. 2017. “A Comment on Privacy.” Journal of the Academy of Marketing Science 45 (2): 156–159. https://doi.org/10.1007/s11747-016-0504-7
  • Strycharz, J., and C. M. Segijn. 2022. “The Future of Dataveillance in Advertising Theory and Practice.” Journal of Advertising 51 (5): 574–591. https://doi.org/10.1080/00913367.2022.2109781
  • Taylor, C. 2004. Modern Social Imaginaries. Durham, NC: Duke University Press.
  • Taylor, L., L. Floridi, and B. van der Sloot. 2017. “Introduction: A New Perspective on Privacy.” In Group Privacy: New Challenges of Data Technologies, edited by L. Taylor, L. Floridi, and B. van der Sloot, 10–23. Dordrecht: Springer.
  • van Dijck, J. 2014. “Datafication, Dataism and Dataveillance: Big Data between Scientific Paradigm and Ideology.” Surveillance & Society 12 (2): 197–208. https://doi.org/10.24908/ss.v12i2.4776
  • Vlačić, B., L. Corbo, S. C. e Silva, and M. Dabić. 2021. “The Evolving Role of Artificial Intelligence in Marketing: A Review and Research Agenda.” Journal of Business Research 128: 187–203. https://doi.org/10.1016/j.jbusres.2021.01.055
  • West, S. M. 2019. “Data Capitalism: Redefining the Logics of Surveillance and Privacy.” Business & Society 58 (1): 20–41. https://doi.org/10.1177/0007650317718185
  • Yeung, K. 2017. “‘Hypernudge’: Big Data as a Mode of Regulation by Design.” Information, Communication & Society 20 (1): 118–136. https://doi.org/10.1080/1369118X.2016.1186713
  • Zuboff, S. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: Public Affairs.

Appendix 1:

Cited media articles.