Research Article

The ethics of self-tracking. A comprehensive review of the literature

ABSTRACT

This paper presents a literature review on the ethics of self-tracking technologies, which users employ to monitor their activity and bodily parameters. By examining a total of 65 works extracted through a systematic database search and backwards snowballing, the authors of this review discuss three categories of opportunities and ten categories of concerns currently associated with self-tracking. The former comprise empowerment and well-being, contribution to health goals, and solidarity. The latter are social harms; privacy and surveillance; ownership, control and commodification of data; autonomy; data-facilitated harm; datafication and interpretability of data; negative impact on the relation to self and others; shortcomings of design; negative impact on health perception; and regulation and enforcement of rules. The review concludes with a critical analysis of the existing literature and an overview of a future research agenda that could complement current work on the ethics of self-tracking.

Introduction

The goal of this article is to provide a comprehensive review of the literature addressing ethical aspects of self-tracking, which refers to the collection, representation and analysis of personal data in numerical form with the help of digital devices (a definition we unpack in the next section). Throughout history, many ways of monitoring and examining the self and individual activity have been developed: diaries and journals, art, poetry, fiction and confessions have all been employed by individuals to gather and present relevant information about themselves in order to examine and possibly change their behavior (Friesen, Citation2017; Heehs, Citation2013). The shared characteristic of these traditional modes of self-monitoring and evaluation is their predominantly qualitative nature: their insights are usually represented in linguistic or visual forms. Conversely, quantified ways of representing and evaluating the self have been largely absent in the past, despite some notable exceptions such as Benjamin Franklin’s virtue tables (Franklin, Citation2005) or Aristotle’s geometrical metaphors illustrating the doctrine of the golden mean (Aristotle, Citation2004).

Table 1. Search strings.

Table 2. Number of works extracted for the review.

Table 3. Categories of ethical aspects of self-tracking.

However, in recent years certain technological developments have sparked great interest in quantified ways of self-monitoring and evaluation. The introduction of affordable sensors and the increasing processing power of wearable and personal devices have made the collection of metrics about oneself more feasible. With the rapid growth of the smart and wearable device market, enthusiastic media coverage and community-led efforts, what can be labeled as digital self-tracking of individual activity has become an increasingly popular practice. It merits the attention of philosophers: insights gained from the quantification of the self often motivate behavior change, and the tracking itself is usually oriented toward the adjustment of patterns of activity (users might track to achieve greater focus, increase the amount of time spent with friends and family, or put more emphasis on health-related activity), which can have significant ethical implications. However, these recent developments in the field of self-tracking are not the only reasons justifying the need for a review of its ethical aspects.

First, even though the ethical literature is an important tool for both researchers and policymakers, Sofaer and Strech (Citation2012) noted the difficulties in locating relevant work and extracting the arguments made by its authors. Consequently, reviews of ethical issues are an indispensable tool for researchers and policymakers wishing to acquire a full picture of the ethical dimension of a given phenomenon. Ethical arguments are not universally accepted, and individual ethical approaches are not able to provide a full overview of all issues that may be deemed ethically relevant.Footnote1 A review outlining all of the ethical issues already identified in the literature provides a more accurate overview of the debate and reduces the risk that readers will focus only on the issues found in papers published in the most reputable journals or by the most renowned researchers (see Mertz et al., Citation2016).

Second, the most recent overview of ethical challenges and opportunities arising from the use of technologies related to those forming the focus of this study was published in 2014 (Jacquemard et al., Citation2014), and the debate on the ethical dimension of self-tracking has expanded significantly in the intervening years. Moreover, the review by Jacquemard et al. dealt primarily with lifelogging and not quantified self-tracking (a difference which will be expanded upon further below). Consequently, it did not touch on many issues we discuss in our article (e.g., the economic-political dimension of self-tracking, impact on autonomy), and the issues found in both reviews (e.g., privacy, impact on social interactions) are discussed in different depth and scope. As Jacquemard et al. deal with an altogether different, even if related, practice, we believe that a comprehensive, up-to-date mapping of the ethical dimension of self-tracking is badly needed. At the time of writing, there are no reviews of the ethical aspects of self-tracking.

Conceptual clarifications

Any systematic literature review aimed at providing an overview of ethical aspects of self-tracking will first need to address some conceptual issues arising from the current state of research. It is difficult to determine what exactly self-tracking entails, as well as where it begins and ends. There is no agreement in the existing literature about the scope of self-tracking and the terms that should be used to describe it. Authors discussing the collection of personal data with digital devices use terms including “self-tracking,” “quantified self” (as follows from the name of the Quantified Self movement), “self-quantification,” “activity tracking,” “personal analytics,” “personal informatics” and “lifelogging” (Lupton, Citation2016a; Selke, Citation2016a). For the sake of consistency, we propose to use “self-tracking” as the main term for the collection of quantified personal data with digital devices, occasionally substituting “self-quantification” and “activity tracking” to avoid repetition.

The terminological diversity present in the existing literature hints at definitional problems associated with the current state of research on self-tracking. Current discussions lump together applications of technology as diverse as the tracking of menstrual cycles using analogue technologies, quantification of exercise with the help of wristbands and the laborious attempts to record every aspect of life with the help of wearable cameras and memory storage, as in the case of Gordon Bell (Bell & Gemmell, Citation2009).

To address this obstacle, we define self-tracking as the collection, representation and analysis of personal data in numerical form with the help of digital devices. This definition does not include all the key characteristics of self-tracking. Instead, it is meant to draw a strong distinction between self-tracking and lifelogging, which we consider to be a distinct phenomenon, even if it also refers to the collection of personal data. We believe lifelogging differs from self-tracking both in the type of data collected and in the purpose of collection.

First, lifelogging deals mostly with qualitative data and is more closely connected with memory, the description of impressions, and the retention of images. Conversely, self-tracking focuses predominantly on the quantification of activity and bodily parameters (although the quantified data can also be presented in visual form, e.g., as graphs). To provide an example, in our understanding, a person watching their weight would engage in lifelogging if they regularly took photographs of their body, while they would engage in self-tracking if they kept a record of the numerical values representing their weight in, for example, kilograms. It must be noted, however, that the two practices are not mutually exclusive and some self-tracking projects might also benefit from the use of pictures or other qualitative means of retention (e.g., people tracking their weight loss in kilograms and body fat percentage, but also taking pictures representing their progress).Footnote2 Nevertheless, the focus on numbers is a distinct characteristic of self-tracking, as evidenced, for example, by the motto of the Quantified Self movement, which gathers the most dedicated enthusiasts of self-tracking: “self-knowledge through numbers.”

Second, there is a difference in the purpose of the two practices. We consider lifelogging to be concerned with storing personal information mostly for the purposes of retention – the goal of a lifelogger is usually to create an easily accessible and searchable archive of memories, impressions or notes. On the other hand, self-tracking is often meant to be a basis for actionable insights that can be conducive to behavior change (e.g., developing the habit of daily exercise). Of course, this distinction does not provide a razor-sharp divide – for example, there is nothing preventing a lifelogger from using their qualitative data to achieve some pre-established goal, just as a self-tracker could monitor some metrics out of curiosity. However, a clear teleological dimension associated with habit-formation can be observed in self-tracking (Kristensen & Ruckenstein, Citation2018).

Consequently, a definition of self-tracking as “collection, representation and analysis of personal data in numerical form with the help of digital devices” is meant to delineate a distinct and broad field of technologies and practices that can still be studied in a coherent and comprehensive manner. While lifelogging might be similar in some respects to self-tracking, including lifelogging practices and technologies in our review would make it too broad and potentially unmanageable from a practical and conceptual standpoint. After all, there exist many more practices of collecting, representing and analyzing personal information with the help of digital devices that do not depend on quantification: digital journaling, the curation of a digital photo archive and even personal blogging. All of these could be part of a lifelogging project, but they are only remotely related to (our understanding of) self-tracking by virtue of their digital nature and focus on personal information. The distinction between quantitative and qualitative ways of recording information is crucial for the purposes of distinguishing between different practices of collecting personal data, and it allows us to focus on self-tracking as a unique phenomenon. The difference in the purpose of data collection (i.e., retention vs behavior change and management) only further justifies the distinction between self-tracking and lifelogging.

Finally, we should note that we understand ethics in a very broad manner as a philosophical discipline dealing with values, norms, rights, duties, goods, and virtues. Consequently, ethical aspects of a given subject matter (in our case self-tracking) are the features that are connected or bear upon any of the above-mentioned phenomena. Typically, ethical aspects are discussed explicitly in professional ethics journals, but other disciplines, such as social science or anthropology, might also deal with them in an implicit way or in non-ethical terms (e.g., as concerns or problems). However, as we adopt a pluralistic view of ethics and do not want to privilege a specific ethical theory, we decided to equally consider all aspects of self-tracking that were labeled by the discussed authors as normatively relevant.

Method

Research question/Objective

Our review aims at providing the answer to the following research question: What ethical aspects of self-tracking have been identified in the literature?

Following the arguments by Sofaer and Strech (Citation2012) about the need for comprehensive reviews of reasons in ethics, we decided to adopt a maximally broad research question in order to comprehensively summarize the current state of the ethical debate about self-tracking.

Consequently, we focus on the ethical aspects of self-tracking rather than merely the ethical issues. In our view, a review of the ethics of self-tracking should provide the readers with a full picture of the ethical dimension of the practice. For this reason, we also summarize the positive aspects of self-tracking, as their inclusion could allow readers to evaluate whether the risks posed by self-quantification are justified by the potential benefits.

We also added a complementary research question: Which theories and methods have been used to identify ethical aspects of self-tracking? We adopt a pluralistic view of ethics and believe that competing methods are more likely to provide different, complementary results rather than overlap. Consequently, we believe that an overview of theories and methods used in the works extracted for this review could help our readers determine the depth of the existing discussion and identify potential research gaps.

Eligibility criteria

In order to be considered for the review, extracted papers needed to fulfill the following criteria:

  1. Have the practice of self-tracking (as defined in the conceptual clarifications section) as their primary focus.

  2. Discuss ethical aspects of self-tracking.

  3. Be peer-reviewed journal articles, book chapters or books.

We examined the title, abstract and keywords of each of the papers extracted for the review to determine their eligibility for full reading. We then once again applied these eligibility criteria to each of the papers selected for full reading.

Although we recognize that self-tracking could be discussed by authors publishing on a variety of subjects (e.g., smartphones, AI systems, algorithms), the inclusion of all potentially relevant discussions would be impractical (in terms of the search strategy) and unfeasible (in terms of time commitment). Consequently, we limited the scope of the review to works that deal primarily with self-tracking and state so explicitly in the text.

Moreover, as the relevant literature is heterogeneous and spans disciplines, we decided to also include works that do not discuss the ethical aspects of self-tracking in an explicit manner. In particular, authors from disciplines such as anthropology, ethnography and sociology might discuss normatively relevant features of self-tracking but not label them as ethical, instead using language such as “concerns,” “potential,” “challenges,” “opportunities” and others. We decided that the findings presented in these works are important for any review attempting to provide the full picture of the ethical dimension of self-tracking. Consequently, we accounted for this in the eligibility criteria (works eligible for our review do not need to deal with the ethical aspects of self-tracking in an explicit way) and designed a search strategy that would retrieve the maximum feasible number of relevant works from disciplines other than philosophy.

Finally, we decided to focus only on peer-reviewed journal articles, book chapters and books, as these are the most important sources in the relevant disciplines (particularly in philosophy). We excluded conference materials from the search: while some of them are peer reviewed, the standard is not universal and the quality of conference materials can vary. The publication of conference materials is also less common in the disciplines relevant to our review. Additionally, the inclusion of conference materials would have saturated the search results, mostly with irrelevant publications.

Search strategy

This literature review is based on searches conducted in four databases: Scopus,Footnote3 Web of Science,Footnote4 Academic Search CompleteFootnote5 and PhilPapers.Footnote6 These databases comprise a significant number of journals from across various disciplines, facilitating the retrieval of articles from ethics, as well as the social sciences, anthropology, ethnography and other fields that might address ethical aspects of self-tracking technologies and practices. The search strings were created on the basis of two sets of search terms. The first was built around self-tracking and its most relevant synonyms and consists of the terms “self-track*,” “quantified self*,” “self-quantif*,” “personal informatic*,” “personal analytic*” and “lifelog*,” with asterisks added to include other forms of the searched words and quotation marks added to maximize the efficiency of the search.

The inclusion of the term “wearable*” was considered, but we ultimately decided against it for reasons of practicality and time. Inclusion of the term “wearable*” would have led to the initial inclusion of an extremely high number of texts that would then have had to be manually excluded (e.g., for dealing with wearables that do not allow for self-tracking, such as smart glasses). The term “lifelog*” was included in the first set even though lifelogging is not the subject of this review and, as we argued in the previous section, is a practice distinct from self-tracking. However, despite the differences between the two practices, the terms self-tracking and lifelogging are sometimes used interchangeably in the literature. Some authors discuss the practices as synonymous or overlapping (Jacquemard et al., Citation2014; Lupton, Citation2016a), while others consider self-quantification to be a subset of a more diverse group of lifelogging practices (Selke, Citation2016a). Consequently, it is impossible to determine outright that an article discussing what we would label the ethics of lifelogging (e.g., ethical aspects of personal digital archives, video lifelogs and other similar phenomena) would not provide some insight into practices of quantified self-tracking. To address this problem, we decided to include the term “lifelog*” in the search and manually exclude the works that deal with qualitative, retention-oriented collection of personal data (which do not fit the criteria of this review). This was done on the basis of their titles, abstracts and keywords, as this allowed us to manually retain works that use the term “lifelogging” in their discussion of what we have defined here as quantified self-tracking.

The second set included the terms “ethic*,” “moral*,” “virtue*” and “norm*,” thus aiming at returning works explicitly discussing the ethical, moral and normative dimensions of self-tracking. The inclusion of the term “value*” was considered, but ultimately rejected, as this would excessively saturate the search results with works discussing numerical values associated with quantified self-tracking. The sets were then combined into search strings adapted to suit the search engines used by each of the four databases (see Table 1).
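To illustrate how the two sets of terms described above can be combined, the sketch below assembles a generic Boolean query in Python. This is only an approximation for the reader's benefit: the review's actual, database-specific strings are listed in its Table 1, and real search engines each use their own field codes and syntax.

```python
# Illustrative sketch only: the review's actual search strings are given in
# its Table 1. This shows the generic logic of combining the two term sets:
# synonyms joined with OR, and the two sets joined with AND.

tracking_terms = ['"self-track*"', '"quantified self*"', '"self-quantif*"',
                  '"personal informatic*"', '"personal analytic*"', '"lifelog*"']
ethics_terms = ['"ethic*"', '"moral*"', '"virtue*"', '"norm*"']

def build_query(set_a, set_b):
    """Join each set with OR, then combine the two sets with AND."""
    return f"({' OR '.join(set_a)}) AND ({' OR '.join(set_b)})"

query = build_query(tracking_terms, ethics_terms)
print(query)
```

Such a string would then be adapted to each database's syntax (e.g., restricted to title, abstract and keyword fields), which is why the review reports a separate string per database.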

The initial search, which was conducted on 28 July 2020, returned 361 results across the four databases, and the exclusion of duplicates left 212 results (see Table 2). Subsequently, 62 results were excluded as they were not peer-reviewed journal articles, books or book chapters. Out of the remaining 150 results, 5 were unavailable to us, which left 145 sources. The titles, abstracts and keywords of the available results were inspected to determine their eligibility for this review. Where abstracts were unavailable (e.g., in the case of book chapters), we inspected the introduction.

Consequently, 71 results were rejected for not dealing with self-tracking or not dealing with the ethical issues surrounding it. This step left us with 74 results, which were then selected for full reading. Full reading led to the rejection of 34 works, leaving 40 results discussing ethical issues surrounding self-tracking (Ajana, Citation2017; Arora, Citation2019; Baker, Citation2020; Barassi, Citation2017; Borthwick et al., Citation2015; Cederström & Spicer, Citation2015; Daly, Citation2015; Danaher et al., Citation2018a, Citation2018b; Duus et al., Citation2018; Fotopoulou & O’Riordan, Citation2017; Gabriels & Coeckelbergh, Citation2019; Gertenbach & Mönkeberg, Citation2016; Gimbert & Lapointe, Citation2015; Ha, Citation2017; Hill, Citation2019; Hoy, Citation2016; Hull, Citation2018; Klauser & Albrechtslund, Citation2014; Kleinpeter, Citation2017; Klugman, Citation2018; Klugman et al., Citation2018; Kreitmair & Cho, Citation2017; Lanzing, Citation2016, Citation2019; Lifkova, Citation2019; Lomborg et al., Citation2020; Lupton, Citation2015b; Lupton & Smith, Citation2018; Maturo & Setiffi, Citation2015; Moore & Piwek, Citation2017; Morgan, Citation2016; Oravec, Citation2020; Owens & Cribb, Citation2019; Richardson & Mackinnon, Citation2018; Sanders, Citation2017; Sharon, Citation2017; Sharon & Zandbergen, Citation2017; Till, Citation2018; Toner, Citation2018).

In order to extract relevant articles from a greater number of disciplines and broaden the results of the database search, we engaged in backwards snowballing (Jalali & Wohlin, Citation2012; Wohlin, Citation2014). This was done with the aim of identifying works that discuss ethical concerns surrounding self-tracking, but do not label these concerns explicitly as ethical (using more general language instead, e.g., worries or downsides).

For this purpose, we examined each of the papers referenced in the works extracted through the database search and considered their inclusion in the review by applying the eligibility criteria (i.e., examining their titles, abstracts and keywords to assess whether they discuss ethical aspects of self-tracking). The same procedure was repeated for each of the works extracted through backwards snowballing. Moreover, where book chapters were examined, we looked at all the other chapters published in the same volume and considered their inclusion in the review by applying the same exclusion criteria as we did during all the previous steps. This was particularly justified as some of the books were specifically devoted to self-tracking and other related practices and technologies. These two steps (backwards snowballing and analysis of other chapters from given books) helped us identify 25 additional relevant works (Ajana, Citation2018a; Barta & Neff, Citation2016; Challa et al., Citation2017; Crawford et al., Citation2015; Frank & Klincewicz, Citation2018; Gabriels & Moerenhout, Citation2018; Kreitmair, Citation2018; Li & Hopfgartner, Citation2016; Lupton, Citation2013, Citation2015a, Citation2016a, Citation2016b; Martens & Brown, Citation2018; Moore, Citation2017; Moore & Robinson, Citation2016; Morozov, Citation2013; Neff & Nafus, Citation2016; Nissenbaum & Patterson, Citation2016; Piwek et al., Citation2016; Schulz, Citation2016; Selke, Citation2016a, Citation2016b; Sharon, Citation2018; Swirsky & Boyd, Citation2018; Till, Citation2014). Out of these, 21 were extracted through the analysis of the references and four (Li & Hopfgartner, Citation2016; Schulz, Citation2016; Selke, Citation2016a, Citation2016b) were extracted through the consideration of other chapters in books extracted during the previous steps.

It is worth mentioning that one of the papers extracted through the search (Danaher et al., Citation2018a) was a response to open peer criticism of an earlier paper by the same authors (Danaher et al., Citation2018b), which made up an entire issue of the American Journal of Bioethics. Although two of the papers found in the discussed issue of the journal were initially returned by our search (Hull, Citation2018; Klugman, Citation2018), a further five were extracted through backwards snowballing (Frank & Klincewicz, Citation2018; Kreitmair, Citation2018; Martens & Brown, Citation2018; Sharon, Citation2018; Swirsky & Boyd, Citation2018), which might at least partially account for the high number of papers extracted in this step.

Admittedly, even with the above taken into account, the number of results added through backwards snowballing is high in comparison to the number extracted through the database searches. However, as noted in section 2, a significant portion of the research on self-tracking technologies and practices has been conducted by authors representing disciplines other than ethics and philosophy. Researchers from fields such as sociology, anthropology and ethnography have published a relatively large number of highly influential papers discussing the ethical aspects of self-tracking technologies, but they often did not use the terms employed in philosophical research (i.e., moral or ethical), focusing instead on outlining the concerns surrounding self-tracking or the potential negative aspects connected to its development. We decided that a thorough literature review on the ethical aspects of self-tracking would have to cover these discussions, and we considered two potential methods of extracting the relevant work.

The first involved broadening the second set of search terms with terms such as “concern*,” “challeng*,” “disadvantag*,” “opportunity*” and others. However, expanding the second set of search terms in this manner drastically increased the number and scope of results, making it impossible to conduct this review in a reasonable amount of time. Consequently, we decided to engage in a thorough procedure of backwards snowballing (as discussed above), assuming that ethically relevant papers dealing with self-tracking should be referenced by the authors of works extracted through our search methodology focusing solely on explicitly ethical literature. In our view, this approach offered the best chance of extracting the maximum number of works relevant to this literature review while remaining feasible from a practical standpoint. As a result of the database searches, backwards snowballing and examination of book chapters, a final number of 65 journal articles, books and book chapters will be discussed in this literature review.
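The counts reported across the search steps can be checked with simple arithmetic. The sketch below is merely a verification aid for the reader (not part of the review's method), with every figure copied from the prose above:

```python
# Screening flow as reported in the text (all counts copied from the prose).
initial = 361                          # results across the four databases
after_dedup = 212                      # after removing duplicates
after_type_filter = after_dedup - 62   # excluding non-articles/books/chapters
available = after_type_filter - 5      # 5 sources were unavailable
full_reading = available - 71          # rejected on titles/abstracts/keywords
final_from_search = full_reading - 34  # rejected after full reading

assert after_type_filter == 150
assert available == 145
assert full_reading == 74
assert final_from_search == 40

# Adding the 25 works found via backwards snowballing and book chapters:
total_reviewed = final_from_search + 25
assert total_reviewed == 65            # total works discussed in the review
```

Each subtraction matches the numbers stated in the text, confirming that the reported flow is internally consistent.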

Data extraction and analysis

Arguably, there are no universally used methods for the interpretation and analysis of works in the field of ethics, and the authors of other reviews rarely refer to specific approaches (Kahrass et al., Citation2021; Mertz et al., Citation2016, Citation2017). Although Kahrass et al. (Citation2021) recently attempted to provide exhaustive guidelines for conducting reviews of ethical literature, they also contended that, due to a lack of established methods, it might be most feasible for authors to simply stipulate which kind of analysis they performed and present each of its steps.

Our review is consistent with the views presented by Sofaer and Strech (Citation2012) on reviews of reasons in (bio)ethics, and we attempted to synthesize the current state of the literature in maximum detail. In order to limit the influence of our ethical convictions and preferred theories, we did not construct a priori categories that would guide our analysis and systematize the findings of the reviewed literature. Instead, we adopted an inductive approach and developed the set of ethical aspects of self-tracking on the basis of those found within the literature.

We first engaged in a close reading of each text to identify the main arguments made by the authors and the methods and theories they used in their analysis. We then tagged individual ethical aspects by identifying either the features explicitly labeled by the authors as ethical (e.g., through the use of terms such as autonomy, privacy, and others), or the features labeled by the authors as areas of concern, challenges, opportunities and the like. We drew on our ethical expertise to model the latter set of aspects in ethical terms.

Following the initial tagging, we combined individual ethical issues into larger categories. This was done to simplify the structure of the review and make our synthesis easier to follow. For example, authors identified several unrelated ways in which self-tracking data can be used to harm the users (e.g., discrimination, use of data in criminal activity, function creep, stalking), but for the purposes of this review they were grouped into one category (Data as basis of harm). When creating these general categories, we looked for instances where different authors discussed relatively similar ethical aspects (e.g., different notions of privacy), or where there was significant overlap between the discussions, even if the arguments were modeled in different terms. This was the case, for example, in the discussion about privacy and surveillance which was subsumed under a single category.

Following these procedures, we were able to identify 13 categories of ethical aspects of self-tracking technologies discussed in the literature. Three of these categories relate to the opportunities connected to self-tracking, while the other ten can be considered challenges associated with these technologies and practices, so we created two higher-level categories to distinguish the positive and negative aspects of self-tracking. The division, as well as the number of works at least mentioning each of the categories, is reflected in Table 3, and the categories themselves are introduced in more detail in their respective sections.

After the initial tagging and creation of categories, we held consensus meetings to determine the validity of our grouping and agree on the reconstruction of individual ethical aspects. The authors were involved as follows: MW prepared the search strategy, ran the search and performed the initial analysis and coding. BG reviewed the search strategy and the coding. FOB and YS independently reviewed the coding. All the authors participated in the consensus meetings and reviewed the final analysis.

In some cases, an overlap between categories can be observed. Concerns connected to privacy are closely related to the ownership of data, whereas social harms can be associated with either harmful design or data being used to inflict specific harms. However, each of the categories is distinct enough on its own and was discussed by authors in enough detail to warrant separate treatment. Some authors presented only a superficial or unoriginal discussion of the ethical aspects falling under a given category. Where this was the case, we first reconstruct the most common and cursory arguments and then proceed to highlight the more original and in-depth discussions. However, to ensure that our review provides a full picture of the current state of the debate on the ethical aspects of self-tracking, we mention every paper falling under each category and provide enough detail to give readers an overview of the arguments present in the debate.

Moreover, we attempted to categorize the methodologies and theoretical approaches that allowed the authors to arrive at their findings, but this proved difficult. We were unable to identify the adopted methodologies and theoretical approaches of 26 works discussed in this review (i.e., the authors did not make their methods and theoretical positions explicit, and it was not possible to deduce them from the content of their work or the reference list), and a further 21 works contained references to various ethnographic and sociological methods, such as interviews with users or the analysis of marketing materials, app descriptions and terms of service. Excluding ethnographic and sociological methods, the philosophy of Michel Foucault was the most commonly invoked context for the analysis of self-tracking technologies and practices, with eight works explicitly referring to his theories as a basis for their analysis (Ajana, Citation2017; Baker, Citation2020; Fotopoulou & O’Riordan, Citation2017; Gabriels & Coeckelbergh, Citation2019; Klauser & Albrechtslund, Citation2014; Lifkova, Citation2019; Richardson & Mackinnon, Citation2018; Sanders, Citation2017). However, as noted in the findings (section 5.1.1 on Empowerment and well-being), Lupton (Citation2016a) also mentioned Foucault’s ideas of technologies of the self and biopolitics without explicitly presenting a Foucauldian approach to self-tracking across her entire work. Other approaches used by more than one author included Marxism (Moore, Citation2017; Moore & Robinson, Citation2016; Schulz, Citation2016; Till, Citation2014), Science and Technology Studies (Ha, Citation2017; Klauser & Albrechtslund, Citation2014; Li & Hopfgartner, Citation2016) and phenomenology (Kreitmair, Citation2018; Kreitmair & Cho, Citation2017). Some theoretical and methodological approaches were listed in only one paper each.
These include “neuroethics” (Kreitmair & Cho, Citation2017),Footnote7 “contextual integrity” (Nissenbaum & Patterson, Citation2016), legal studies (Challa et al., Citation2017), philosophy of Paul Virilio (Hill, Citation2019), philosophy of Bruno Latour (Klauser & Albrechtslund, Citation2014), “feminist analytics” (Sanders, Citation2017), “ironies of automation” (Baker, Citation2020), “vital normalism” (Gertenbach & Mönkeberg, Citation2016) and a literature review (Morgan, Citation2016).Footnote8 It has to be noted, however, that some papers incorporated more than one approach (e.g., Sanders (Citation2017) used both Foucauldian philosophy and what she called “feminist analytics”), while it was also difficult to assess what some authors meant by invoking a specific theory or concept (e.g., Gertenbach and Mönkeberg (Citation2016) referring to “vital normalism”).

Findings

Opportunities

Empowerment and well-being

The effects self-tracking can have on user empowerment and well-being are mentioned in the literature extensively. However, a significant number of authors only make general statements about these aspects that echo the promises of self-tracking professed by enthusiasts and marketing materials, and they often do so as a way to introduce their discussions of other aspects of self-quantification (Crawford et al., Citation2015; Ha, Citation2017; Kreitmair & Cho, Citation2017; Lanzing, Citation2016; Li & Hopfgartner, Citation2016; Lupton, Citation2015a, Citation2015b; Lupton & Smith, Citation2018; Moore & Piwek, Citation2017; Morozov, Citation2013; Selke, Citation2016a; Sharon, Citation2018). The benefits mentioned in the cursory discussions include increased self-knowledge, improved wellbeing, increased productivity, greater fitness, facilitated achievement of goals, facilitated management of habits and behavior, and facilitated decision-making through the use of personalized recommendations.

Ajana (Citation2017) reiterates those advantages, but argues that people have already been achieving similar goals through non-digital means of measurement. However, she notes that self-tracking technologies are much easier to use than analogue means of tracking and their relatively low price makes them more readily available to the general public. A similar point is made by Neff and Nafus (Citation2016) who argue that these technologies are valuable to users because they create opportunities to engage in self-tracking experiments that would not have been possible otherwise and consequently allow users to gain more control over their habits and bodies. Referring to the ethos of self-experimentation present in the self-tracking community, Sharon (Citation2017) observes that the empowering potential of self-tracking technologies can be the greatest when users modify their devices and establish personal parameters and categories that are most in line with their individual needs and goals. In her eyes, the autonomy of users is increased when they actively engage with the technology instead of passively accepting the software and platforms supplied by technology companies.

Several authors (Danaher et al., Citation2018a; Duus et al., Citation2018; Lupton, Citation2013; Sharon & Zandbergen, Citation2017) discuss self-tracking technologies in the context of human enhancement. In their view, self-tracking devices can serve as technological tools for extending the users’ decision-making capabilities (Danaher et al., Citation2018a; Duus et al., Citation2018), “developing new senses” by increasing the users’ epistemic capacitiesFootnote9 (Sharon & Zandbergen, Citation2017, p. 1700), and overcoming the limitations of the human body (Lupton, Citation2013). However, Owens and Cribb (Citation2019) note that even though self-tracking technologies can enhance the autonomy of their users, they cannot change the material circumstances which influence the number and character of available choices. Consequently, users with higher income and greater life chances might benefit from the autonomy-enhancing effects of self-tracking technologies to a much greater extent as they have more means and opportunities to reap the benefits of quantification.

Self-tracking has also been discussed as contributing to users’ self-confidence, helping them deal with anxiety or feelings of inadequacy, and providing motivation and reassurance (Barassi, Citation2017; Duus et al., Citation2018; Lomborg et al., Citation2020; Owens & Cribb, Citation2019). In such cases, users can rely on data generated by the devices to improve their self-esteem and battle negative feelings. Relatedly, Sharon and Zandbergen (Citation2017) argue that self-tracking could be compared to “a practice of mindfulness,” as it allows users to become aware of, and narrate, phenomena that would otherwise remain outside of their view. Sharon (Citation2017) additionally observes that the use of self-tracking data can broaden the existing self-knowledge by supplying additional information. On a similar note, self-tracking has been described as a Foucauldian technology of the selfFootnote10 (Fotopoulou & O’Riordan, Citation2017; Gabriels & Coeckelbergh, Citation2019; Lupton, Citation2016a; Richardson & Mackinnon, Citation2018). However, it is unclear whether the authors simply want to refer to the drive for self-improvement and self-care inherent in Foucault’s concept of the technologies of the self, or whether they also want to highlight self-tracking as a tool for moral deliberation. In fact, only Gabriels and Coeckelbergh (Citation2019) mention self-tracking in the context of “moral improvement and reflection,” but in their view, this dimension is less pronounced than the orientation toward fitness.

The empowering aspects of self-tracking technologies are observed in the context of work as well. Till (Citation2018) claims that the deployment of wearable devices in workplace wellness schemes could help workers improve their wellbeing, while simultaneously increasing their productivity and thus also proving beneficial to the employers. Moreover, in his view, the association of work with greater health and wellbeing could additionally help workers attach more meaning to their professional activity. Schulz (Citation2016), in turn, analyses self-tracking from a Marxist perspective, which considers labor as alienating the workers and subsuming their leisure time into processes of production. Even so, in his view, self-tracking technologies could help workers better manage their free time and ensure that they spend it in ways that are more in line with their needs.

Finally, Borthwick et al. (Citation2015) notice that when used in educational contexts, wearable self-tracking tools could increase student engagement and allow teachers to develop new teaching strategies. Moreover, data supplied through these devices could help educators better identify the needs of students with physical disabilities.

Contribution to health goals

Some authors mention how the use of self-tracking devices can help individuals improve their own health by monitoring factors relevant to their condition (Ajana, Citation2017, Citation2018a; Gabriels & Moerenhout, Citation2018; Gimbert & Lapointe, Citation2015; Ha, Citation2017; Lupton, Citation2015b, Citation2016a; Neff & Nafus, Citation2016; Piwek et al., Citation2016; Selke, Citation2016a, Citation2016b; Sharon, Citation2017). This can be particularly relevant for individuals living with chronic illnesses that already require them to keep track of a significant number of factors (e.g., diabetics who need to monitor their glucose levels, and people with cardiovascular conditions who need to track their blood pressure). According to the aforementioned authors, self-tracking can facilitate everyday management of health, give patients more control over their health, and reduce reliance on professional care, thus lowering healthcare costs and contributing to public health overall. Regular physical activity, which is often associated with self-tracking, is also seen as improving individual health, for example, by reducing the risk of obesity and cardiac disease (Li & Hopfgartner, Citation2016). Moreover, self-tracking devices can significantly increase treatment adherence by reminding patients to follow doctors’ recommendations and take their medicine at prescribed intervals (Klugman et al., Citation2018; Neff & Nafus, Citation2016). It has to be noted, however, that according to Piwek et al. (Citation2016) there is still not enough evidence supporting claims about the medical benefits of self-tracking, and many of the promises of self-tracking for health may only be fulfilled in the future.

Furthermore, self-tracking devices can contribute to public health through health promotion and by increasing health awareness and literacy (Lupton, Citation2015b, Citation2016a; Maturo & Setiffi, Citation2015). For example, Lupton (Citation2015b) notes that fertility and sexual activity tracking applications could do much to educate the general public about sexually transmissible diseases and reproductive health. Such promises have led many healthcare professionals to believe that self-tracking can increase patient participation and democratize medicine (Ajana, Citation2018a). Interestingly, positive public health outcomes do not necessarily have to be connected to health-related initiatives. According to Till (Citation2018), corporate wellness schemes involving self-tracking devices often aim at increasing workers’ physical activity, which is believed to have a positive impact on their productivity. However, he argues that this can have the side effect of improving workers’ health overall.

Finally, self-tracking data is believed to hold great promise for advancing medical research and preventing or eliminating certain diseases (Ajana, Citation2017; Li & Hopfgartner, Citation2016; Neff & Nafus, Citation2016; Sanders, Citation2017; Sharon, Citation2017), especially as self-tracking devices enable continuous collection of health-related data and can help medical professionals gather information even outside the medical context (Sharon, Citation2017).

Community and solidarity

As many self-tracking tools provide opportunities and encourage users to share data with others, some authors contend that this could lead to community-formation and feelings of solidarity (Ajana, Citation2017; Barta & Neff, Citation2016; Crawford et al., Citation2015; Lupton, Citation2015a, Citation2016a; Sharon, Citation2017, Citation2018; Sharon & Zandbergen, Citation2017), especially if the sharing takes place in the structured context of the Quantified Self community (Barta & Neff, Citation2016; Crawford et al., Citation2015; Sharon, Citation2017; Sharon & Zandbergen, Citation2017). Engaging in self-tracking might additionally make it easier for users to become part of communities that are oriented toward fitness or technological gadgets (Crawford et al., Citation2015; Fotopoulou & O’Riordan, Citation2017). Disclosing individual interests, vulnerabilities and shared values in datafied form is seen as improving trust and mutual understanding, which helps users establish long-lasting reciprocal relationships and maintain existing ones (Barta & Neff, Citation2016; Neff & Nafus, Citation2016, pp. 146–47). This is aided by the fact that some users find it easier to share intimate or embarrassing details in quantified rather than narrative ways (Sharon & Zandbergen, Citation2017).

Additionally, discussion and aggregation of self-tracked data can help users attribute more meaning to their data, as well as generate insights that would not have been available at the individual level. This not only gives the sharing community a sense of purpose, but can also motivate users to contribute their data solely for the benefit of others (Lupton, Citation2015a). Similarly, data-centered communities can come together in establishing their own goals for the use of data, which might go against the interests of third parties that are using the data for their own purposes (Barta & Neff, Citation2016). Moreover, by analyzing data about others, users of self-tracking technologies can gain new and richer knowledge of other people and their relationships with them, which can create a feeling of proximity (Gabriels & Coeckelbergh, Citation2019; Lupton, Citation2015b).

On the other hand, even those enthusiastic about the community-generating potential of self-tracking technologies note that the kinds of solidarity and community enacted through such means can be problematic. It is difficult to determine which shared interests and values, if any, serve as a foundation of a community that could be formed through the sharing of data (Barta & Neff, Citation2016; Sharon, Citation2017), although according to Barta and Neff (Citation2016), this ambiguity is understood by communities such as Quantified Self as something positive, since it allows for the possibility of multiple values and interests coexisting within a single group.

Nevertheless, it has been noted that a community centered around the sharing of data would be a narrow one and it might include only those already willing and able to engage in self-tracking practices (Ajana, Citation2017; Sharon, Citation2017, Citation2018). As noted by Ajana (Citation2017), it is possible that within the self-tracking world, data sharing and solidarity are being conflated and treated as synonymous. On the other hand, even if communities based on data-sharing are indeed narrower, they might be qualitatively different from the richer communities that are formed in other contexts, and this might still provide value to their members (Sharon, Citation2018). Moreover, according to Sharon, the parameters with which self-tracking practices are concerned are not unique to self-trackers, which means that self-tracking communities are, at least in theory, open to a broader range of members and can bond with them through an appeal to universally shared human qualities (Sharon, Citation2017).

Additionally, it is possible that users could engage with others only because they are prompted by their apps (Fotopoulou & O’Riordan, Citation2017). This is closely connected to Ajana’s observation that the framing of data sharing as a solidary activity and a contribution to the public good serves the commercial interests of technology companies that profit from the sale of and access to users’ self-tracking data (Ajana, Citation2017, Citation2018a). On a similar note, Till claims that the interpersonal dimension of self-tracking is most visibly manifested in the value extracted by corporations from the collectively generated data (Till, Citation2014). As such, he believes that the form of community generated through self-tracking would be most akin to communities formed in the workplace, as it is the shared (digital) labor that provides a basis for bonding.

Concerns

Social harms

The first ethical issue relating to the social harms of self-tracking and simultaneously the ethical challenge that is most discussed in the literature connects to what we label the norm-prescribing and norm-enforcing effects of self-tracking technologies. Some authors argue that self-tracking technologies are not neutral in their measurement as they present certain thresholds and target scores as recommended, typical or optimal. This might have the added effect of entrenching these scores as normative standards or “correct” results, suggesting that everything outside of a predetermined range might be substandard or abnormal (Ajana, Citation2017; Barassi, Citation2017; Danaher et al., Citation2018b; Fotopoulou & O’Riordan, Citation2017; Lanzing, Citation2019; Lifkova, Citation2019; Lupton, Citation2016a; Moore, Citation2017, p. 10, 17; Moore & Piwek, Citation2017; Neff & Nafus, Citation2016, pp. 38–44; Owens & Cribb, Citation2019; Sanders, Citation2017; Selke, Citation2016b; Sharon, Citation2017; Sharon & Zandbergen, Citation2017; Toner, Citation2018).

The normative dimension of self-tracking technologies has been criticized from several standpoints. Companies rarely disclose their motivations for promoting specific ideals and thresholds in their apps and devices (Baker, Citation2020; Crawford et al., Citation2015) and they often privilege concrete, usually dominant perspectives, e.g., those of young, fit, white, male users (Barassi, Citation2017; Lupton, Citation2016a; Moore, Citation2017, p. 17; Nissenbaum & Patterson, Citation2016; Sharon, Citation2017). As such, they are also seen as replicating existing, often harmful, stereotypes (Sharon, Citation2017). This is particularly evident in relation to stereotypes concerning gender and sexual orientation (Danaher et al., Citation2018b; Klugman, Citation2018; Lupton, Citation2015b, Citation2016a; Sanders, Citation2017), for example, when self-tracking technologies assume by default that women of a certain age should be trying to get pregnant, regardless of the life plans of a particular user. Moreover, while discussing how self-tracking devices frame obesity and diet, Maturo and Setiffi (Citation2015) note that these technologies do not acknowledge the possible limitations or risks inherent in them, and they do not reference alternative norms and standards that can be applied to the tracked factors.

Moreover, authors observe that the normative dimension of self-tracking technologies can be internalized by the users, thus instilling in them a sense of obligation to conform to the norms endorsed by the apps and devices (Ajana, Citation2017; Fotopoulou & O’Riordan, Citation2017; Gertenbach & Mönkeberg, Citation2016; Moore, Citation2017, p. 17; Owens & Cribb, Citation2019; Sharon, Citation2017; Sanders, Citation2017). As noted by Oravec (Citation2020), this can be particularly problematic when users try to reach the targets that do not fit their individual circumstances. Those who are unable to achieve the scores labeled by the designers of technology as normal might develop a sense of inadequacy (Baker, Citation2020; Moore & Piwek, Citation2017), especially as designers of self-tracking technologies could deliberately move the targets so that they remain unreachable in order to convince users that they need to continue working on themselves (Baker, Citation2020).

Three of the works discussed in this review (Gertenbach & Mönkeberg, Citation2016; Klauser & Albrechtslund, Citation2014; Kleinpeter, Citation2017) support the claim that self-tracking technologies can enforce particular norms, but their authors do not agree that this happens in line with the standards pre-determined by the developers. Instead, they all propose that the norms promoted through self-tracking are flexible in character and dynamically change on the basis of supplied data. Consequently, the users are required to engage in “a constant process of optimization” (Klauser & Albrechtslund, Citation2014, p. 283), develop “a flexible and adaptable dispositif within themselves, which enables them to compare themselves to [...] various calculations of normality” (Gertenbach & Mönkeberg, Citation2016, p. 36), and enter a state of “obsessive quantification of oneself” in which “no one would ever be normal enough” (Kleinpeter, Citation2017, p. 247).

Due to their normative dimension, self-tracking devices are discussed as tools of societal control. A number of authors argue that by monitoring deviations from the norm, self-tracking technologies could be used by decision-makers to push users into patterns of behaviors deemed “standard” or “normal” (Crawford et al., Citation2015; Lupton, Citation2016a; Morgan, Citation2016; Richardson & Mackinnon, Citation2018; Sanders, Citation2017; Selke, Citation2016b; Sharon & Zandbergen, Citation2017). Direct coercion, punishments and incentives are seen by authors as particularly effective on that front (Crawford et al., Citation2015; Hoy, Citation2016; Neff & Nafus, Citation2016, p. 135; Selke, Citation2016b; Toner, Citation2018), but self-disciplining effects on users were discussed as well (Lupton, Citation2016a; Morgan, Citation2016; Sanders, Citation2017). Self-tracking technologies are also seen as enabling those in power to control the users by supplying information about them and their actions (Baker, Citation2020; Cederström & Spicer, Citation2015; Gabriels & Coeckelbergh, Citation2019; Lanzing, Citation2019; Moore, Citation2017, p. 10; Selke, Citation2016a). This can be seen as closely connected to surveillance (see the section on privacy and surveillance). On that note, three works consider self-tracking technologies to be an extension of Deleuzian “control society” (Gertenbach & Mönkeberg, Citation2016; Moore, Citation2017, p. 211; Sanders, Citation2017).Footnote11 Similarly, several authors voiced concerns that the use of self-tracking technologies can shift power relations in a way that disadvantages the users, for example, by giving employers more information about the employees (Moore & Piwek, Citation2017; Nissenbaum & Patterson, Citation2016), or when data is taken over and exploited by large corporations and governments, resulting in a transfer of power from the users (Ajana, Citation2017, Citation2018a; Baker, Citation2020; Hull, Citation2018; Lupton, Citation2016a).

It has been noted by many authors that self-tracking prompts users to internalize a compulsion to productivity (Ajana, Citation2017; Fotopoulou & O’Riordan, Citation2017; Maturo & Setiffi, Citation2015; Moore, Citation2017; Neff & Nafus, Citation2016; Nissenbaum & Patterson, Citation2016; Richardson & Mackinnon, Citation2018; Till, Citation2018) and a desire to be entrepreneurial citizens who manage themselves efficiently in a business-like manner (Cederström & Spicer, Citation2015, p. 104; Ha, Citation2017; Lupton, Citation2013, Citation2015a, Citation2016a; Lupton & Smith, Citation2018; Maturo & Setiffi, Citation2015; Moore, Citation2017). Moreover, self-tracking has been criticized for encouraging an organization of life in line with the needs of the economy (Neff & Nafus, Citation2016, pp. 128–31; Schulz, Citation2016; Selke, Citation2016b), for example, by framing leisure time as a period in which users should rest to regain energy for future work (Neff & Nafus, Citation2016, p. 129). Consequently, many authors argue that self-tracking reinforces the dominant narratives of neoliberal ideology and encourages users to conform to neoliberal values (Ajana, Citation2017; Cederström & Spicer, Citation2015, p. 104; Ha, Citation2017; Lupton, Citation2013, Citation2015a, Citation2016a; Moore, Citation2017; Moore & Robinson, Citation2016; Sanders, Citation2017; Selke, Citation2016a).

The normative dimension of self-tracking has been discussed by authors in connection with the question of responsibility as well. The narrative surrounding normal, standard and recommended results is seen as creating a sense of responsibility or obligation to track and manage more and more parameters concerning one’s life, such as wellness, productivity, fitness, diet and others (Danaher et al., Citation2018b; Gertenbach & Mönkeberg, Citation2016; Lifkova, Citation2019; Lomborg et al., Citation2020; Lupton, Citation2013, Citation2015a, Citation2016a, Citation2016b; Moore & Robinson, Citation2016; Richardson & Mackinnon, Citation2018; Sanders, Citation2017). Individuals not willing to self-track might be stigmatized as irresponsible (Lupton, Citation2015b; Selke, Citation2016b) or even failing to meet a moral imperative (Richardson & Mackinnon, Citation2018; Sanders, Citation2017; Sharon, Citation2017).

On the other hand, the discourse surrounding self-tracking has been noted to shift responsibility from wider social, legal and institutional systems to individuals. One example of that is the pressure put on users to assume responsibility for managing their own data instead of expecting companies and institutions to preserve users’ privacy (Lanzing, Citation2019; Lomborg et al., Citation2020). Furthermore, authors claim that self-tracking technologies promote a narrative of personal responsibility in the area of health, where these technologies are seen as promoting the (neoliberal) idea that staying healthy is primarily an individual effort and not something that should be secured through an institutionalized healthcare system or social support (Ajana, Citation2017, Citation2018a; Danaher et al., Citation2018b; Fotopoulou & O’Riordan, Citation2017; Ha, Citation2017; Hull, Citation2018; Lupton, Citation2013; Maturo & Setiffi, Citation2015; Sanders, Citation2017; Sharon, Citation2017, Citation2018). One work observed that the same process occurs beyond health, in the context of general self-improvement, including fitness, productivity and others (Gertenbach & Mönkeberg, Citation2016). Such individualization of responsibility places a greater burden on individuals, as self-management through self-tracking requires substantial effort (Morgan, Citation2016; Neff & Nafus, Citation2016, p. 56; Sanders, Citation2017; Selke, Citation2016b). The rise in popularity of self-tracking devices can also have the added effect of shifting attention away from decision-makers: instead of addressing systemic and social causes of ill-health through institutions and policy, even greater emphasis is placed on individual management of health, well-being, fitness and other trackable factors (Danaher et al., Citation2018b; Daly, Citation2015; Maturo & Setiffi, Citation2015; Morozov, Citation2013, p. 253; Sharon, Citation2017; Richardson & Mackinnon, Citation2018).

Finally, authors note that the deployment of self-tracking technologies could increase existing inequalities. As the adoption of a new device involves significant costs and requires technical and cognitive capabilities, it is observed that some people are unlikely to be able to afford to self-track (Gimbert & Lapointe, Citation2015; Kleinpeter, Citation2017; Klugman et al., Citation2018; Lomborg et al., Citation2020). As a result, self-tracking will potentially provide the most benefits to those individuals who are already in the higher strata of society, while leaving others behind (Borthwick et al., Citation2015; Gabriels & Moerenhout, Citation2018; Morozov, Citation2013, p. 240), especially as the underprivileged are often less healthy and already face greater costs in managing their own health (Richardson & Mackinnon, Citation2018), while their ability to manage their health and behavior might be further limited by existing social conditions (Owens & Cribb, Citation2019; Neff & Nafus, Citation2016, p. 160).

Privacy and surveillance

Many authors refer to privacy risks surrounding self-tracking that need to be scrutinized, but a significant portion of them do not elaborate on the subject (Borthwick et al., Citation2015; Challa et al., Citation2017; Crawford et al., Citation2015; Daly, Citation2015; Fotopoulou & O’Riordan, Citation2017; Gabriels & Coeckelbergh, Citation2019; Gabriels & Moerenhout, Citation2018; Ha, Citation2017; Hoy, Citation2016; Lupton, Citation2015a, Citation2016b; Moore, Citation2017, p. 25; Moore & Piwek, Citation2017; Oravec, Citation2020; Swirsky & Boyd, Citation2018). Some of the authors treat privacy as something self-explanatory and intrinsically valuable, while others seem to feel the need to at least mention it even while they focus on other ethical aspects of self-tracking. However, some highlight privacy concerns in connection with a more detailed discussion of the regulation of self-tracking data. Such arguments are reconstructed in the section on regulation and enforcement of rules.

Several authors observe that privacy risks connected to self-tracking data are further exacerbated by substandard security of these devices as well as the common use of cloud storage and network connectivity (Kreitmair & Cho, Citation2017; Lanzing, Citation2016; Li & Hopfgartner, Citation2016; Lupton, Citation2015b, Citation2016a; Neff & Nafus, Citation2016, p. 173; Nissenbaum & Patterson, Citation2016; Selke, Citation2016a). Moreover, the practice of anonymization of user data in aggregated data sets is considered by the authors as insufficient for the protection of privacy, as the data can be easily deanonymized (Ajana, Citation2017, Citation2018a; Arora, Citation2019; Neff & Nafus, Citation2016, pp. 63–64; Piwek et al., Citation2016). Individual users might also be concerned about the privacy of bigger data sets to which they contribute (Ajana, Citation2017) and other people’s personal information might be associated with individual users’ data, for example, when self-tracking devices are used in the context of a romantic relationship (Danaher et al., Citation2018a). As a result, Arora (Citation2019) suggests that user privacy could be safeguarded only if developers of self-tracking technologies are required to enter into a fiduciary relation with their clients.

Moreover, even if privacy breaches do not lead to direct user harms, they can still violate users’ contextual expectations relating to their personal information (Klauser & Albrechtslund, Citation2014; Lanzing, Citation2016, Citation2019; Neff & Nafus, Citation2016, p. 64; Nissenbaum & Patterson, Citation2016). According to this view, even if users consent to share their data in a particular context, privacy breaches disrupt the expected information flows and might make the users uneasy or even make them lose their sense of security and agency. Additionally, it has been observed that privacy violations can be particularly problematic in the case of self-tracking technologies, as these often collect highly sensitive information, such as data related to mental and physical health, sexual activity, eating habits and other additional information supplied to the device to increase the accuracy of tracking (Kreitmair & Cho, Citation2017; Lanzing, Citation2019; Li & Hopfgartner, Citation2016; Lupton, Citation2015b). Self-tracking data is often shared by default with third parties, and the vague wording of privacy policies might mean that users will remain unaware of potential violations of their privacy (Danaher et al., Citation2018b). Moreover, users who depend on self-tracking technologies for management of illness might face the choice of either giving up their privacy or forgoing the medical benefits provided by the devices (Klugman et al., Citation2018).

Finally, it has been noted that the increased adoption of self-tracking technologies can change the general public’s attitude to privacy (Ajana, Citation2017, Citation2018a; Hull, Citation2018; Morozov, Citation2013, pp. 235–38). Ajana (Citation2017, Citation2018a) notes that privacy is often framed by enthusiasts and developers as an anachronistic concept that privileges individual values over the public good arising from the potential societal benefits of self-tracking. However, in her view this rhetoric serves the commercial interests of technology companies at the expense of individual users and the society as a whole. In turn, Hull (Citation2018) and Morozov (Citation2013) observe that the increasing popularity of self-tracking might result in a situation where collection and sharing of data will become the norm and those unwilling to give up their privacy will face steeper prices or will be forced to pay additional fees for premium services.

Surveillance is another issue related to privacy that is widely discussed by the authors. A significant number of them discuss the potential of self-tracking to be used for workplace surveillance through the employment of workplace wellness schemes and wearable devices designed to monitor the activity and productivity of supermarket and warehouse workers (Cederström & Spicer, Citation2015, pp. 102–108; Gabriels & Coeckelbergh, Citation2019; Ha, Citation2017; Lupton, Citation2016b; Moore, Citation2017; Moore & Robinson, Citation2016; Nissenbaum & Patterson, Citation2016; Selke, Citation2016a, Citation2016b). Interestingly, Moore and Robinson (Citation2016) observe that as workers often use self-tracking devices outside of the work context, companies might consequently gain the opportunity to monitor their workers during leisure time.

Authors mention the possibility that self-tracking data will be used for surveillance purposes by government institutions, especially in the context of healthcare (Barassi, Citation2017; Ha, Citation2017; Lupton, Citation2015a, Citation2016a; Morgan, Citation2016; Sanders, Citation2017; Sharon, Citation2017). This form of self-tracking-aided surveillance has been discussed in connection to concepts introduced by Foucault, namely biopolitics and biopower (Lifkova, Citation2019; Lupton, Citation2015a; Sanders, Citation2017; Sharon, Citation2017) as well as his interpretation of the panopticon (Lifkova, Citation2019; Lupton, Citation2016a; Sanders, Citation2017; Sharon, Citation2017). Surveillance through the use of self-tracking devices has also been scrutinized as a tool of societal control and norm-enforcement (Klauser & Albrechtslund, Citation2014; Lanzing, Citation2019; Sharon, Citation2017), issues which are discussed in the social harms section.

Finally, self-tracking has been connected with the phenomenon of sousveillance or co-veillance, that is, bottom-up monitoring done by the citizens and the monitoring of other people by their peers which can take place when other people access user data that is shared through self-tracking apps and on social media (Daly, Citation2015; Danaher et al., Citation2018b; Gabriels & Coeckelbergh, Citation2019; Lanzing, Citation2016; Lupton, Citation2015a, Citation2015b). As some of these authors note, although voluntary engagement in surveillance practices is not inherently problematic and might even bring positive outcomes (i.e., empowerment and a sense of community), it can expose users to abuse and exploitation by third parties (Danaher et al., Citation2018b; Lupton, Citation2015b) or impact social relations (Gabriels & Coeckelbergh, Citation2019).

Ownership, control and commodification of data

A number of authors observe that the question of who owns the data generated through the use of commercially purchased self-tracking devices is unresolved and controversial (Ajana, Citation2017, Citation2018a; Borthwick et al., Citation2015; Ha, Citation2017; Kreitmair & Cho, Citation2017; Lupton, Citation2016a; Morgan, Citation2016; Neff & Nafus, Citation2016, pp. 63–65; Piwek et al., Citation2016; Till, Citation2014). As a result, users might not have insight into what is collected (Lupton, Citation2016a; Piwek et al., Citation2016) or what their legal rights over self-tracking data are (Lupton, Citation2016a), especially as the companies have commercial motivations to bind the generated data to their products (Till, Citation2014) and prepare legal documents regulating the ownership and use of data in ways that give them the most power over it (Kreitmair & Cho, Citation2017). Borthwick et al. (Citation2015) note that ownership of data becomes even more problematic when self-tracking devices are used by a minor and the generated data should, at least in theory, be the property of their parents. Ajana (Citation2017, Citation2018a) and Neff and Nafus (Citation2016, p. 65) argue that data should not be understood as property as this concept does not adequately reflect the needs and expectations of the parties involved. Neff and Nafus (Citation2016) believe that self-tracking data should be best described as a “child” of the user and the company supplying the device, as it could not be created without either of the parties. In their view, this framework should make it easier to determine the rights and responsibilities of the parties in relation to self-tracking data.

The unclear structure of data ownership is seen as giving companies near full discretion over how and with whom data is shared (Ajana, Citation2017, Citation2018a; Piwek et al., Citation2016). Even authors who do not discuss ownership directly note that access to data is often granted to third parties, which in their view could be problematic as users may be unaware and unable to control who uses or exploits their data and for what purposes (Barassi, Citation2017; Challa et al., Citation2017; Crawford et al., Citation2015; Daly, Citation2015; Gabriels & Coeckelbergh, Citation2019; Klauser & Albrechtslund, Citation2014; Lanzing, Citation2016, Citation2019; Li & Hopfgartner, Citation2016; Lifkova, Citation2019; Lupton, Citation2015b, Citation2016a). The design of these devices (Lanzing, Citation2016), their terms of use (Barassi, Citation2017; Lifkova, Citation2019), their connectivity (Daly, Citation2015; Gabriels & Coeckelbergh, Citation2019; Klauser & Albrechtslund, Citation2014) and the discourse of engagement and voluntariness that surrounds them (Lupton, Citation2015b) were all discussed as factors contributing to the users’ lack of awareness about the purposes and destinations of data sharing. On the other hand, Arora (Citation2019) argues that if users were granted more control over and insight into how data is shared, it could “paradoxically increase disclosure of sensitive information” (Arora, Citation2019, p. 183) as users could be manipulated into consenting to share data or they might not realize all the consequences of their consent. Nevertheless, Arora still claims that users cannot take it for granted that third parties that gain access to personal data will manage it in ethical and responsible ways.

The use of self-tracking devices as part of workplace wellness schemes is also seen as problematic as it allows employers to access the data collected by the devices worn by their employees, consequently reducing employees’ control over their personal information (Gabriels & Coeckelbergh, Citation2019) and giving the employers potentially excessive knowledge about the lives of their workers (Moore & Piwek, Citation2017; Nissenbaum & Patterson, Citation2016). Klugman et al. (Citation2018) note that the issue of access to personal self-tracking data could be contentious in the healthcare context as well, since patients are sharing their data, often voluntarily, with third parties that include healthcare providers, device manufacturers and friends and family. However, these parties are bound by different obligations in relation to the data and its confidentiality.

The commodification of self-tracking data and its integration into market processes is also discussed in the literature. However, most of the authors only raise concerns about the exploitative nature of developers’ practices of generating profits from users’ personal data by selling data to third parties or by monetizing it on their own, for example, through targeted advertising (Ajana, Citation2017, Citation2018a; Barta & Neff, Citation2016; Challa et al., Citation2017; Danaher et al., Citation2018a, Citation2018b; Hill, Citation2019; Kleinpeter, Citation2017; Kreitmair & Cho, Citation2017; Lupton, Citation2015b, Citation2016b; Piwek et al., Citation2016; Selke, Citation2016a; Sharon, Citation2017; Swirsky & Boyd, Citation2018).

A small number of authors attempt to expand such claims. Crawford et al. (Citation2015) observe that the value generated through the sale and use of self-tracking data is never returned to the users. Morozov (Citation2013, p. 235) claims that self-tracking data is of particular interest to investors, who are attempting to turn it into a separate “asset class,” while Lupton (Citation2015a, Citation2016a) observes that self-tracking extracts value from the human body. In her view, practices of self-tracking have been designed to facilitate the transformation of bodily information into tradeable commodities that can be described as “digital biocapital” (Lupton, Citation2016a).

A similar argument is pursued by Till (Citation2014), who claims that self-tracking is an attempt on the part of technology corporations to extract value from users’ exercise. In his view, whereas previously companies have been able to profit from people’s exercise activity only indirectly (e.g., by selling running shoes), self-tracking devices allow them to turn activity data into a digital commodity that can be sold for profit. Similarly, Schulz (Citation2016) observes that quantification of everyday activity reifies human bodies and phenomena such as well-being, and consequently subjects them to economic logic. He claims that as a result, leisure is integrated into labor processes and users’ free time is transformed to better fit the rules of capitalist production. On the other hand, Till (Citation2014) and Lupton (Citation2016a) argue that the use of self-tracking devices can itself be described as unpaid digital labor. They both claim that as a result of their activity, users’ labor is exploited as they provide monetizable content to the creators of self-tracking technologies without receiving any compensation (Lupton, Citation2016a), or even pay (i.e., by buying the device) to have surplus value extracted from their labor (Till, Citation2014). Fotopoulou and O’Riordan (Citation2017) similarly compare self-tracking to labor, but focus on the amount of effort required to self-track, without mentioning the commercial dimension of the practice.

Autonomy

Authors note that the decision to adopt a self-tracking device is not always completely voluntary. For example, when fitness bands are used as part of a workplace wellness scheme, workers could face direct and indirect pressure (such as shame and stigmatization) to wear the device and contribute their data to their employer (Lupton, Citation2016b; Moore, Citation2017, p. 166; Moore & Piwek, Citation2017; Nissenbaum & Patterson, Citation2016; Oravec, Citation2020; Schulz, Citation2016). Moreover, as private health insurance companies can link self-tracking data to health insurance premiums, consumers might lose out on financial incentives or lose a significant amount of money if they opt out of using the devices (Li & Hopfgartner, Citation2016; Lupton, Citation2015a, Citation2016a; Neff & Nafus, Citation2016, p. 135; Maturo & Setiffi, Citation2015; Selke, Citation2016b). Similarly, where self-tracking technologies are used for medical purposes, patients might agree to share their health data only to avoid the risk of losing access to a potentially life-saving device (Klugman et al., Citation2018). Even in a private context, people might feel increasing pressure to engage in self-tracking and to share their data as these practices become more commonplace (Hull, Citation2018; Klugman, Citation2018). Additionally, when data is framed as valuable and beneficial to the community, those willing to opt out of data sharing might be stigmatized as selfish or as having something to hide (Frank & Klincewicz, Citation2018; Neff & Nafus, Citation2016, p. 44; Nissenbaum & Patterson, Citation2016; Morozov, Citation2013, pp. 238–39). Lupton (Citation2016a, Citation2016b) observes that it is difficult to draw a line between self-tracking practices that are voluntary and those that are externally imposed on the users (for a more detailed discussion of coercive and punitive dimensions of self-tracking, see the section on social harms).

While the above paragraph delineates the loss of autonomy that might arise when users are forced or pressured to wear self-tracking devices, users’ autonomy can be eroded even when self-tracking is taken up voluntarily as the technology and the data it collects can influence users’ decision-making processes and behavior (Baker, Citation2020; Danaher et al., Citation2018a; Duus et al., Citation2018; Frank & Klincewicz, Citation2018; Klauser & Albrechtslund, Citation2014; Lanzing, Citation2019; Martens & Brown, Citation2018; Moore & Robinson, Citation2016; Owens & Cribb, Citation2019; Toner, Citation2018).

When users depend on the information supplied by their device to make choices, they delegate some of their agency to the device and, as a result, it can be difficult to determine the extent to which the devices influence user choice (Danaher et al., Citation2018a; Duus et al., Citation2018; Klauser & Albrechtslund, Citation2014; Martens & Brown, Citation2018). Consequently, users might feel that their actions are not entirely their own, but shaped by the devices (Duus et al., Citation2018). This can be particularly egregious when the devices operate in an opaque manner, or frame information and present possible choices in an attempt to nudge or manipulate users into specific behavior without their knowledge and consent (Baker, Citation2020; Frank & Klincewicz, Citation2018; Lanzing, Citation2019; Owens & Cribb, Citation2019; Toner, Citation2018). Users’ autonomy can also be infringed when their personal information is made public. This can hinder users’ freedom of choice as other people’s knowledge and expectations might limit the availability of some decisions (Lanzing, Citation2016, Citation2019) or adversely impact their life chances (Lupton, Citation2015a). Similarly, autonomy can be reduced when the devices supply so much potentially relevant information that users find themselves overwhelmed and unable to make any decisions (Baker, Citation2020).

Furthermore, the use of self-tracking devices can make users dependent on them in decision making when the device’s authority takes precedence over the user’s autonomous decision-making capabilities (Martens & Brown, Citation2018). Moreover, as user experience is designed to maximize engagement, users can become addicted to their self-tracking devices (Oravec, Citation2020).

Finally, authors observe that self-tracking technologies can prompt users to engage in activity solely for the high score an app attributes to it, or treat it only as a means to achieve an overarching goal, such as fitness (Danaher et al., Citation2018b; Gabriels & Coeckelbergh, Citation2019; Hill, Citation2019; Klugman, Citation2018; Kreitmair, Citation2018; Kreitmair & Cho, Citation2017). In such cases, inherently valuable parts of life, for example, sex and relationships (Danaher et al., Citation2018b; Klugman, Citation2018; Kreitmair, Citation2018), can become instrumentalized and valued by the users only in relation to their impact on the tracked metrics (e.g., sex helping burn calories or contributing to higher mood scores). This can reduce users’ satisfaction with some pleasurable activities, such as walking (Kreitmair & Cho, Citation2017), or change the way users engage with them, for example, by treating them as something competitive (Danaher et al., Citation2018b; Gabriels & Coeckelbergh, Citation2019).

Data-facilitated harm

Some authors note that self-tracking data can serve as a basis for discrimination against users, although a significant portion of them simply state that possibility without providing a detailed discussion (Arora, Citation2019; Baker, Citation2020; Daly, Citation2015; Kreitmair & Cho, Citation2017; Lupton, Citation2015b, Citation2016b; Maturo & Setiffi, Citation2015; Sharon, Citation2017). Examples of discriminatory practices mentioned by these authors include denial of credit, loss of reputation, denial or loss of employment and denial of health insurance coverage.

The authors who present more developed arguments about the possibility of discrimination on the basis of self-tracking data often discuss discrimination related to health and employment as connected. They note that employers might be likely to discriminate against employees whose data portrays them as less healthy or even less active than average (Challa et al., Citation2017; Lanzing, Citation2016; Nissenbaum & Patterson, Citation2016; Selke, Citation2016b). In such cases, self-tracking data leads employers to conclude that an employee’s characteristics (e.g., living with diabetes or suffering from insomnia) might make them less productive or a health risk, thus increasing the price of their insurance. As a result, they could be fired, denied employment or ignored when promotions are considered. Moreover, self-tracking data can result in users being refused health coverage or receiving higher prices for health insurance (Ajana, Citation2017, Citation2018a; Lanzing, Citation2019; Lupton, Citation2016a). Borthwick et al. (Citation2015) also note that data collected through wearable devices in educational contexts might influence teachers’ decisions, potentially leading them to assess their students incorrectly, although it is unclear whether in their view this should be attributed to discrimination, misinterpretation of data, or inaccuracy of the devices. Moreover, as noted by Ajana (Citation2017, Citation2018a), it is possible to discriminate against individuals even if their own data is unavailable. In her view, aggregated data could motivate discrimination against entire categories of people and not specific users. In this sense, people who do not use self-tracking devices could still become victims of discrimination on the basis of self-tracking data.

Authors additionally identify instances where aggregated self-tracking data could be used to motivate layoffs or shift work culture in the company in directions that are not in line with workers’ interests (Moore, Citation2017, p. 165; Moore & Piwek, Citation2017; Selke, Citation2016b). Similarly, it has also been observed that self-tracking data can serve as evidence against the user in criminal trials and insurance or disability claim disputes (Ajana, Citation2017, Citation2018a; Crawford et al., Citation2015; Lupton, Citation2016a; Neff & Nafus, Citation2016, p. 182; Oravec, Citation2020), for example, when Fitbit data invalidates a suspect’s alibi or is used to question a particular accident’s impact on a user’s activity and health. This can be especially troubling as government agencies responsible for security already collect vast amounts of information on individual citizens and self-tracking data can be of particular interest in this regard (Li & Hopfgartner, Citation2016).

Moreover, authors note that self-tracking technologies and the data they collect can facilitate harmful or even criminal behavior aimed against the users. Criminals could use data for identity fraud, illegal drug purchases, fraudulent health insurance claims and even when planning burglaries (Li & Hopfgartner, Citation2016; Lupton, Citation2015a, Citation2016a; Piwek et al., Citation2016). On this note, self-tracking technologies could help stalkers (Danaher et al., Citation2018a), facilitate voyeurism (Gabriels & Coeckelbergh, Citation2019) and even help abusers control their victims (Danaher et al., Citation2018b).

Additionally, as data is being accessed by an ever increasing number of actors, authors note that users could fall victim to “function creep,” resulting in harms that are still difficult to predict (Frank & Klincewicz, Citation2018; Hull, Citation2018; Lupton, Citation2016b; Sharon, Citation2017).

Datafication and interpretability of data

Many authors observe that the focus on quantified data in self-tracking technologies can be problematic in itself. One line of argument posits that self-tracking can privilege numerical ways of expressing particular phenomena, which are often framed as objective and science-driven, while neglecting the embodied, intuitive, experiential and narrative ways of generating knowledge about the self (Danaher et al., Citation2018a; Hill, Citation2019; Kreitmair & Cho, Citation2017; Martens & Brown, Citation2018; Moore, Citation2017, pp. 9–12; Morozov, Citation2013, p. 261; Neff & Nafus, Citation2016, p. 186; Schulz, Citation2016; Sharon, Citation2017, Citation2018; Sharon & Zandbergen, Citation2017). This can have the added effect of reducing the epistemic self-confidence of the users as they could come to trust the data supplied by their devices over their own impressions of the tracked activity, and consequently become alienated from their body and senses (Danaher et al., Citation2018a; Duus et al., Citation2018; Klugman, Citation2018; Kreitmair, Citation2018; Kreitmair & Cho, Citation2017; Lupton, Citation2016a; Martens & Brown, Citation2018; Schulz, Citation2016; Toner, Citation2018). However, it has to be noted that the opposite effect has also been observed by some authors, as mentioned in the section on empowerment. We discuss this in more detail in the last section of the discussion.

Authors argue that self-tracking can reduce knowledge to numbers by framing data gathered through the devices as the ultimate source of information about the tracked phenomena (Daly, Citation2015; Kleinpeter, Citation2017; Kreitmair & Cho, Citation2017; Moore, Citation2017, pp. 9–12; Sharon, Citation2017, Citation2018; Sharon & Zandbergen, Citation2017; Selke, Citation2016b). Moreover, as knowledge is conflated with data, it is possible that only the easily quantifiable aspects of life will be reflected in the numbers presented by self-tracking devices, while relevant contextual information or more complex phenomena will remain out of sight (Gabriels & Coeckelbergh, Citation2019; Kleinpeter, Citation2017; Kreitmair & Cho, Citation2017; Lupton, Citation2016a; Moore, Citation2017, p. 10; Moore & Piwek, Citation2017; Morozov, Citation2013, p. 244; Owens & Cribb, Citation2019). As a result, users might receive a skewed depiction of the phenomena they are tracking. It is also possible that the focus on data will shift users’ attention away from reality and toward its representation in the form of data (Kreitmair, Citation2018; Moore & Robinson, Citation2016; Selke, Citation2016b). Such aspects of self-tracking devices are discussed in the literature under the terms datafication, dataism and data fetishism.

Interpretability of self-tracking data is likewise seen as having ethical relevance. Although data supplied by self-tracking devices is portrayed as objective and, consequently, not requiring interpretation (Lomborg et al., Citation2020), authors note that users need to put time and effort into making sense of their numbers (Lomborg et al., Citation2020; Neff & Nafus, Citation2016, p. 39). However, not everyone has the data literacy and general interpretative skills required to do so (Gabriels & Moerenhout, Citation2018; Li & Hopfgartner, Citation2016; Lomborg et al., Citation2020; Piwek et al., Citation2016; Selke, Citation2016b). While Selke (Citation2016b) notes that this can result in users internalizing the narratives surrounding self-tracking, other authors observe that misinterpretation of data can have more harmful effects when self-tracking devices are used in the domain of healthcare. These harmful effects include inaccurate self-diagnosis (Gabriels & Moerenhout, Citation2018; Piwek et al., Citation2016), anxiety (Li & Hopfgartner, Citation2016; Lomborg et al., Citation2020) and unspecified dangers arising from medical decisions made on the basis of data (Li & Hopfgartner, Citation2016; Neff & Nafus, Citation2016, p. 143).

Negative impact on relation to self and others

Several authors observe that users can develop negative emotions through self-quantification when they become stressed or anxious about their results and their relation to what is considered normal or healthy (Baker, Citation2020; Gabriels & Moerenhout, Citation2018; Li & Hopfgartner, Citation2016; Lomborg et al., Citation2020; Lupton, Citation2013, Citation2015b, Citation2016a; Morozov, Citation2013, p. 228; Toner, Citation2018). Moreover, self-tracking technologies can instill feelings of insufficiency and failure, or, if users are already suffering from such feelings, self-tracking could lower their self-esteem even further (Baker, Citation2020; Borthwick et al., Citation2015; Lupton, Citation2013, Citation2015b, Citation2016a; Moore & Piwek, Citation2017; Owens & Cribb, Citation2019). As observed by Maturo and Setiffi (Citation2015), the use of self-tracking devices to monitor food intake could also lead people to develop disorders such as anorexia or exacerbate their existing issues with body image and diet. Psychological states such as anxiety can arise when users of self-tracking devices feel that their personal information might be disclosed to other people in great volume and in ways over which they have no control, such as in cases when self-tracking technologies are used in a workplace and workers’ personal data can be accessed by the management (Moore & Robinson, Citation2016; Moore, Citation2017, p. 86; Nissenbaum & Patterson, Citation2016; Oravec, Citation2020). Devices and apps which encourage competition and comparisons between users are seen as particularly likely to induce negative emotional states (Lupton, Citation2015b; Oravec, Citation2020).

Moreover, some authors argue that self-tracking promotes a radically individualistic outlook on life, which can be a form of narcissism or egocentrism (Cederström & Spicer, Citation2015, p. 107; Hill, Citation2019; Lupton, Citation2016a; Morozov, Citation2013, p. 233; Sharon, Citation2017). According to this view, self-quantification encourages users to focus on their own metrics in the quest for excellence, while ignoring relations with others as a field irrelevant to their personal pursuits. Even in cases where self-tracking is oriented toward others, the communal dimension of self-tracking can serve merely as a tool for achieving one’s aims. Shared data may be used in an instrumental way, without reciprocation on the part of the users, and others can be treated merely as a frame of reference or as competition (Gabriels & Coeckelbergh, Citation2019; Lupton, Citation2016b), for example, when people are “encountered” on leaderboards displayed in popular running apps. Moreover, data discloses only characteristics that can be and have been quantified, while obscuring other relevant qualities, which leads to a limited view of others (Gabriels & Coeckelbergh, Citation2019). As noted by Klugman (Citation2018), self-tracking devices used by partners can reduce love to a series of metrics, which can have a significant impact on the nature of romantic relationships.

Relatedly, it has also been observed that self-tracking technologies used in the context of romantic relationships can reduce mutual trust between partners, especially when data supplied by the device is considered more credible than the partner’s own testimony (Martens & Brown, Citation2018), or when the data is used as a means of spying on the partner (Danaher et al., Citation2018b). Klugman et al. (Citation2018) raise a similar concern in the case of doctor-patient relationships, where reliance on self-tracking devices for monitoring treatment adherence would mean that a technical solution is used in place of a relation of trust developing between the patient and the medical practitioner. Similarly, they note that patients and doctors will often have to trust device manufacturers that devices are safe and kept up-to-date.

Finally, Gabriels and Coeckelbergh (Citation2019) argue that reliance on self-tracking devices might mean that the way individuals are perceived is increasingly mediated by data and algorithms. As a result, users could be left with less influence over the image of themselves they project to the world and might have to contend with an image presented by their data, possibly disseminated to others without the users’ knowledge.

Shortcomings of design

The lack of transparency associated with how self-tracking technologies are designed and used has been mentioned by authors as problematic, especially in connection with other issues such as privacy or the sale of data. It has been observed that technology developers do not disclose how their devices and algorithms operate (Baker, Citation2020; Challa et al., Citation2017; Crawford et al., Citation2015; Frank & Klincewicz, Citation2018; Klauser & Albrechtslund, Citation2014; Lanzing, Citation2016, Citation2019; Neff & Nafus, Citation2016, p. 63) or hide relevant information in ambiguously worded and needlessly complex privacy policies, terms of service documents and consent forms (Barassi, Citation2017; Neff & Nafus, Citation2016, p. 117; Klugman et al., Citation2018; Kreitmair & Cho, Citation2017; Danaher et al., Citation2018b). As a result, it is impossible for users to know how and why their data is used, to give meaningful consent to these practices, or to control the flow of their data (Crawford et al., Citation2015; Frank & Klincewicz, Citation2018; Hoy, Citation2016; Klugman et al., Citation2018; Kreitmair & Cho, Citation2017; Lanzing, Citation2019). Moreover, Arora (Citation2019) argues that even if developers opted for transparency, users might still be unable to manage their personal information and fully understand how and why it is used by third parties. In his view, the complexity of these systems and the information they process means that diligent management of data would require a significant amount of time and specialized skills that users might simply not possess.

Furthermore, authors are concerned that self-tracking data can be highly unreliable, and there is often little evidence supporting the claims of the device manufacturers regarding the accuracy of the devices (Barta & Neff, Citation2016; Crawford et al., Citation2015; Gabriels & Moerenhout, Citation2018; Hoy, Citation2016; Kreitmair & Cho, Citation2017; Moore & Robinson, Citation2016; Oravec, Citation2020; Owens & Cribb, Citation2019; Piwek et al., Citation2016). This can be especially harmful when data is used in medical research (Oravec, Citation2020) or in health-monitoring (Gabriels & Moerenhout, Citation2018). Moreover, as noted by Klugman et al. (Citation2018), patients depending on self-tracking devices for the management of illness can be exposed to risk when devices fail. On this note, Piwek et al. (Citation2016) voice concerns that patients using self-tracking devices might develop a false sense of security, while being unaware of the devices’ unreliability and inaccuracy. In this sense, limited reliability of self-tracking devices can harm users’ health.

Finally, some authors note that algorithms can be designed and trained in a way which leads to bias against underprivileged groups (Ajana, Citation2017; Daly, Citation2015; Frank & Klincewicz, Citation2018; Hull, Citation2018; Lupton, Citation2016a; Sharon, Citation2017). In this sense, tracking often does not work as well for members of these groups as it does for those belonging to privileged parts of society. Vulnerable and minority users are also more often classified by the algorithms as at risk or abnormal. Frank and Klincewicz (Citation2018) and Lupton (Citation2016a) observe that self-tracking apps are often particularly biased in this way against women.

Negative impact on health perception

Some authors note that self-tracking can have adverse effects on users’ ideas about health. According to them, self-tracking promotes an obsession with health and fitness by encouraging users to track as many factors as possible at all times, to the point where healthiness becomes excessively desirable to them or even their primary concern (Fotopoulou & O’Riordan, Citation2017; Gabriels & Moerenhout, Citation2018; Li & Hopfgartner, Citation2016; Lupton, Citation2013; Richardson & Mackinnon, Citation2018; Selke, Citation2016b). This can additionally lead users to perceive ordinary behaviors and bodily functions as something connected with ill-health that has to be assessed in medical terms, often resulting in a form of hypochondria (Baker, Citation2020; Gabriels & Moerenhout, Citation2018; Hull, Citation2018; Kreitmair & Cho, Citation2017; Li & Hopfgartner, Citation2016; Lupton, Citation2015b). In this sense, Baker (Citation2020) and Lupton (Citation2015b) both point out how fertility apps medicalize users’ understanding of menstruation and female fertility, while Kreitmair and Cho (Citation2017) note that self-tracking might lead to users worrying that even their mundane and harmless habits, such as sleeping in, might be connected to a hidden illness. On the other hand, Selke (Citation2016b) claims that in the discourse surrounding self-tracking, healthiness and fitness are equated with status and power.

Moreover, self-tracking can shift users’ attention to those aspects of health that are easily quantifiable, thus turning medicine into “a science of measurement” (Kleinpeter, Citation2017, p. 247) and promoting the idea that quantification is the best method for achieving good health (Lupton, Citation2013). This can have a significant impact on the role of medical professionals, who might be expected to spend more of their time analyzing numbers supplied by self-tracking devices instead of providing face-to-face care to patients (Gabriels & Moerenhout, Citation2018; Kleinpeter, Citation2017; Klugman et al., Citation2018; Morgan, Citation2016). Issues related to the duties and liability of the parties involved in quantified medicine were also discussed in the literature. It is an open question to what extent medical professionals can be held responsible for their interpretations of data and whether they can be blamed for missing a diagnosis if they refuse to incorporate self-tracking data into their practice (Gabriels & Moerenhout, Citation2018; Klugman et al., Citation2018; Neff & Nafus, Citation2016). It is similarly unclear how and to what extent users and providers of self-tracking devices can be held responsible for self-tracking-motivated health outcomes and decisions that have not been consulted with medical professionals (Kreitmair & Cho, Citation2017; Neff & Nafus, Citation2016).

Moreover, as noted by Klugman et al. (Citation2018), the use of self-tracking devices might limit the patients’ ability to lie to doctors about their behavior, which, while itself potentially morally questionable, might be well within the patients’ rights. Klugman et al. (Citation2018) also observe that the use of self-tracking devices might allow third parties, such as pharmaceutical companies, to scrutinize the practices of doctors in more detail and influence medical decisions (for example, by monitoring which medicine is prescribed and when).

Regulation and enforcement of rules

Ethical challenges associated with self-tracking technologies are further exacerbated by the relative lack of regulation of these technologies (Baker, Citation2020; Challa et al., Citation2017; Daly, Citation2015; Hoy, Citation2016; Kleinpeter, Citation2017; Lanzing, Citation2016; Maturo & Setiffi, Citation2015; Moore & Piwek, Citation2017; Nissenbaum & Patterson, Citation2016; Oravec, Citation2020). This can be particularly problematic when self-tracking devices are used for health-related monitoring. In many countries, health devices face additional regulatory scrutiny, but the classification of self-tracking devices as consumer technology exempts them from the requirements pertinent to medical devices (Baker, Citation2020; Kreitmair & Cho, Citation2017; Maturo & Setiffi, Citation2015; Nissenbaum & Patterson, Citation2016; Oravec, Citation2020) and allows technology-makers to make unsubstantiated claims about their efficacy (Hoy, Citation2016). This can harm patients who depend on reliable readings connected to their health (Gabriels & Moerenhout, Citation2018), especially when they use these technologies without the oversight of medical professionals (Daly, Citation2015). Moreover, even if there are laws in place that could protect users of self-tracking technologies, government institutions might not always be prepared to enforce them or even be tasked to do so (Challa et al., Citation2017).

There is also the problem of which regulations should apply. As data is shared with parties operating in different countries, it might fall under different regulatory regimes, which might in turn limit the protections granted to users and their data by law. This is particularly relevant as data is often transferred from users in the European Union to corporations operating in the United States, where standards for privacy protection are lower or even effectively non-existent (Kreitmair & Cho, Citation2017; Neff & Nafus, Citation2016, p. 132).

Relatedly, authors note that there are no established ethical standards for the use of self-tracking technologies in research (Gimbert & Lapointe, Citation2015; Kreitmair & Cho, Citation2017; Lomborg et al., Citation2020). According to Gimbert and Lapointe (Citation2015), this could be important in the context of dual-use research and its associated harms, although they do not provide any examples. Moreover, as Lomborg et al. (Citation2020) note, in qualitative studies in which researchers interview participants about their own data, researchers are often entangled in the process of data interpretation, and their input can have a significant impact on how participants encounter the results of their self-tracking practices and relate to themselves. And yet, as they argue, ethical approval may not always be required here. Another gap in ethical oversight was observed by Moore (Citation2017, p. 178), who notes that there are few organizational and professional codes of conduct that deal with best practices for using self-tracking data in a workplace environment, and that those that exist might not be sufficiently established. As a result, the kinds of decisions motivated by self-tracking data and their impact on employees may vary greatly depending on the employer and their own ethical standards.

Discussion

The contributions to the debate lack depth and consistency

It has to be noted that the current contributions to the discussion on the ethical aspects of self-tracking technologies lack depth. Even though the number of works discussed in this review is high and the findings cover a wide selection of ethical aspects of self-tracking technologies, the limitations of the existing work negatively factor into the overall quality of the discussion.

There has been relatively little philosophical and ethical attention devoted to the technologies that are the subject of this review. Most of the authors represent disciplines other than philosophy (and specifically ethics), and the content relevant to this review was often underdeveloped in their work. This can be well illustrated by reference to the section on privacy and surveillance. Although it references the second-highest number of works (42), only a handful of them present an in-depth discussion of privacy, with most authors seemingly treating this issue as something self-explanatory or even obvious (i.e., they only mention that privacy is an issue in connection with self-tracking, without explaining why or what it entails). Overall, it should be noted that most of the issues discussed in this review were raised and elaborated upon in a relatively small number of papers.

In general, the lack of conceptual clarity is a significant limitation of the current discussion. It is unclear which terms are commonly accepted in the current debate to describe self-tracking technologies and what they entail (i.e., the extent and characteristics of specific types of tracking, or their differences from lifelogging). Similarly, even though some authors refer to the concept of neoliberalism in their discussion, they do not define the term and its implications, instead treating it as encompassing everything that is wrong with contemporary capitalism. Conceptual clarity is also lacking, for example, when discrimination is discussed. It is often unclear which of the examples discussed in the literature would constitute discrimination in the strict sense of the term and which are merely unfavorable or harmful outcomes for individuals that are not discriminatory in nature because they do not target particular characteristics: employers firing diabetics are engaging in discrimination, but this is not the case when they use self-tracking data to determine which employees perform their duties poorly.

Consequently, a reader only examining the papers referenced in this literature review might be unable to develop a thorough understanding of ethical issues arising from the employment of self-tracking technologies. The current discussion would greatly benefit from more conceptual clarity as well as from more in-depth evaluation of the ethical aspects of self-tracking technologies that have already been identified in various works.

This problem is further evidenced by the limited methodological diversity of the current work on ethical aspects of self-tracking technologies. Furthermore, even those authors who made their methodological and theoretical positions explicit were not always consistent in applying them throughout the work (e.g., with analysis from a specific standpoint appearing only in one section of the work) or in demonstrating how a given method or theory influenced their findings. Consequently, the current contributions to the literature on ethical aspects of self-tracking can be largely assessed as inconsistent or lacking from a methodological and theoretical standpoint. In our view, this inconsistency may at least partially account for the superficiality of some of the arguments found in the discussed works.

The relative popularity of Foucauldian approaches should be discussed in this context. Foucault’s notion of technologies of the self, extensively used by the authors discussed in this review, can indeed be an informative tool for the analysis of self-tracking practices. However, we argue that in many instances, authors focused on the self-care and self-disciplining aspects of self-tracking at the cost of overlooking its institutional and political context. In our view, despite such strong reliance on Foucault, the current literature does not adequately discuss self-tracking in biopolitical terms, and we see this as a promising avenue for future research. At the same time, the dominant Foucauldian methodology has some inherent shortcomings that need to be acknowledged in future work on self-tracking. By focusing on the notion of technologies of the self, authors might inadvertently attribute more reflectivity and agency to users than is in many cases warranted. The outsourcing of habit formation and reflection is arguably one of the major draws of self-tracking technologies, and it is doubtful that most users take up self-tracking in order to engage in attentive and in-depth practices of the self (instead hoping for guidance and helpful nudges). On the other hand, the Foucauldian notion of power as impersonal and distributed across numerous actors can obscure the reality of self-tracking as predominantly controlled by a handful of large corporations whose particular decisions can be easily identified and evaluated. In our view, Foucauldian approaches are a good tool for discussing the disciplining power of norms endorsed through self-tracking, but not the quasi-sovereign power of specific institutions and actors.

Returning to methodological inconsistency, it is often impossible to determine whether authors discuss actual use cases of self-tracking technologies or whether they base their analyses on marketing materials and the (as of yet unproven) claimed potential of self-tracking devices (especially as many authors do not provide references when they describe the discussed technologies or only refer to press articles). Authors who utilized ethnographic and sociological methods often took the responses of their interviewees and the content of their observations at face value and uncritically depended on them in their own evaluations (e.g., by arriving at general conclusions on the basis of the subjective experiences of individual users), while also drawing on marketing promises and the existing discourse surrounding self-tracking when forwarding their own claims. Although the testimonies of the members of Quantified Self, interviews conducted with ordinary users and materials provided by developers contain a wealth of information about self-tracking devices, they might not be applicable to the experiences of other users and might not be the best basis for arriving at general statements about self-tracking. Without clearer communication of the limitations and perspective of existing sources and of the data generated for the purposes of particular studies, it is impossible to determine the originality and accuracy of the contributions presented by the authors. This is particularly visible in the discussion of empowerment and well-being, as many of the claimed benefits have not been independently tested and critically examined. As a result, the authors’ findings can sometimes resemble a reiteration of Quantified Self talking points and the marketing strategies of self-tracking companies.

Furthermore, the skeptical analyses of self-tracking often excessively depend on Morozov’s (Citation2013) early evaluation of the discussed technologies, without taking into account more empirical, nuanced and contemporary contributions to the debate provided by anthropologists, ethnographers and social scientists. Morozov’s highly critical work clearly influenced a significant part of the current discussion, even though some of the claims made in his book and later echoed in other publications are speculative in nature and have not been substantiated by reference to actual use cases of self-tracking technologies. While some of Morozov’s claims might be valid, future work on ethics of self-tracking should attempt to assess them in light of empirical evidence concerning the deployment of the discussed technologies in the real world.

Consequently, a more systematic and in-depth analysis of ethical aspects of self-tracking could be helpful in alleviating the shortcomings of the current state of the literature that result from the lack of depth and methodological consistency of the existing contributions. The current state of the debate does not allow researchers to arrive at an overarching picture of the ethics of self-tracking technologies, while the methodological inconsistency of individual contributions leads to difficulties when works are compared and their relevance is assessed.

The debate focuses on the present and lacks anticipation

While the current applications of self-tracking are well represented in the reviewed works, their authors do not engage in speculative and future-oriented discussion, which in recent years has become an important part of the ethical evaluation of new technologies thanks to the introduction of systematic frameworks for anticipatory assessment of technology (Brey, Citation2012; Floridi & Strait, Citation2020). At the time of writing, there is no recent in-depth academic study devoted to the future of self-tracking devices, and as the works discussed in this review do not provide predictions or include anticipatory elements, the current state of the literature does not make it possible to foresee the ethical issues most likely to be associated with self-tracking in the next 5–10 years.

That said, we would argue that in the next 5–10 years self-tracking can be expected to remain on its current trajectory, as the accuracy of the devices is likely to grow alongside the variety of metrics trackable through their sensors. The self-tracking market has been growing in both value and the number of users in recent years, and this trend is expected to continue, especially in health-related applications (Business Wire, Citation2020; Deloitte, Citation2017; Ramirez, Citation2013). Moreover, the market has attracted the attention of the largest technology companies and private investors, as evidenced, for example, by Google’s recent $2.1 billion acquisition of Fitbit (Chee, Citation2020; Gartenberg, Citation2019) and Amazon’s own spin on self-tracking through the introduction of the Halo Band (Bohn, Citation2020). As more companies enter the competition, new devices offer more accurate and cheaper ways to quantify an increased variety of metrics. The newest Apple Watch models are claimed to measure users’ blood oxygen levels, with an as-of-yet unsubstantiated promise of eventually being able to detect COVID-19 infections on this basis (Apple, Citation2020). In turn, Amazon’s Halo Band offers users a way to quantify and evaluate their tone of voice, a development which has been criticized in the press as an example of tone policing and unwarranted surveillance (Fowler & Kelly, Citation2020).

Despite some media backlash and consumer skepticism associated with controversial features, the adoption rate of wearable devices is likely to continue to increase. In the foreseeable future, the tracking of a greater scope of variables should become cheaper and require less effort while potentially providing higher accuracy, and consequently, the popularity of self-tracking devices among both individual and institutional stakeholders can be expected to keep growing. This is likely to have varying degrees of impact on the three major types of self-tracking projects: self-tracking for personal purposes, self-tracking for health and self-tracking in the workplace. The existing literature outlines only the current ethical aspects of these three kinds of self-tracking, without focusing on future ethical risks and opportunities that could already be anticipated. The use of self-tracking devices is already often prescribed by medical professionals and mandated by management, and as self-quantification becomes more implicated in individuals’ health, work and leisure, its impact on users’ everyday lives can be expected to increase. Consequently, the exclusive focus of the existing literature on the current applications of the technology is a significant limitation that will need to be addressed in future research.

Anticipatory analyses of self-tracking should extensively cover the potential unequal socioeconomic impact of the devices (i.e., wealthy users being able to reap more benefits of tracking), their future uses in the health sector and the workplace, their norm-prescribing effects (including the expectation that responsible individuals should track themselves in certain contexts), and others. Moreover, anticipatory work on self-tracking considering the impact of quantification on users’ self-knowledge should be particularly interesting, as quantitative ways of evaluating the self are a relatively new phenomenon and lie in stark contrast to the traditionally more popular qualitative (e.g., embodied, linguistic and intuitive) modes of self-assessment. While some of the discussion surrounding datafication covered the ways in which users can privilege numerical information over more traditional, subjective and embodied ways of generating knowledge about the self, the future epistemic shifts introduced by self-tracking merit greater scrutiny. The current literature extensively (even if not conclusively) discusses today’s most pressing epistemic concerns surrounding self-tracking, but it does not adequately present the plausible future shifts in self-evaluation that might take place as self-quantification becomes more commonplace.

At the same time, the literature accurately captures the existing breadth of application of self-tracking technologies, with purely personal use and fitness, healthcare and workplace monitoring all discussed in the reviewed articles. However, it is worth noting that a significant part of the current discussion on self-tracking is US-centric as authors commonly analyze self-tracking while assuming that users are customers of private insurance companies. This is evident in the volume of discussion devoted to the influence of users’ self-monitored fitness and health metrics on their individual insurance premiums, as well as in the concerns voiced over the impact of workers’ participation in workplace wellness schemes on the cost of insurance paid by the employer. It is worth remembering that such potential issues are much less likely to arise or be problematic to users from countries with public healthcare systems who do not have to depend on private companies or their employer to receive healthcare.

On the other hand, although Lupton’s (2016) distinction between different modes of self-tracking (i.e., private, pushed, communal, imposed and exploited) has received some attention from other scholars, the existing discussion on the current application of self-tracking could benefit from a closer analysis of the reasons why people engage in self-tracking and how stakeholders motivate or pressure them to take up the practice.Footnote12 At the time of writing, the majority of works do not discuss the potential motivations, incentives, obligations and expectations at play behind individual and organizational decisions to engage in or promote self-tracking. A person’s uninfluenced choice to wear a smart band and a worker’s membership in a wellness program involve different stakeholders and different contextually relevant factors that need to be considered to provide a thorough ethical analysis. As the potential scope of applications and the variety of interested stakeholders grow, the reasons for which users engage in self-tracking will only become more complicated and involve many more ethically relevant variables. It is worth adding that an anticipatory analysis of possible future use cases of self-tracking technologies conducted in reference to Lupton’s scheme of different modes of self-tracking could be of immense help to ethicists attempting to analyze the complicated landscape of self-quantification.

Certain issues remain unresolved and warrant more attention

The current debate presents many compelling insights, but it is still in early stages and there are many unresolved and underdiscussed ethical issues that warrant further attention of ethicists.

For example, while the discussion reconstructed in the section on empowerment and well-being would suggest a significant positive impact of self-tracking on users’ epistemic self-confidence, well-being and decision-making capabilities, the findings presented in the sections on autonomy, on datafication and interpretability of data and on negative impact on relation to self and others sketch a radically different image of users being alienated from their embodied perception, developing anxiety and self-doubt, and being nudged by the devices’ recommendation systems into decisions they might not have made otherwise, which can be problematic in light of the value accorded to individual autonomy. It could be argued that the positive discussion surrounding self-tracking is merely an echo of the unsubstantiated beliefs presented by the members of the Quantified Self movement or the marketing-oriented talking points of technology makers. However, the claims about the benefits of tracking are not outlandish, and their validity from the users’ subjective point of view has been well demonstrated by authors conducting ethnographic and sociological research, as evidenced by the section on empowerment and well-being. It seems that there are two opposite directions in which self-tracking can lead individual users, depending on their individual circumstances, their unique capabilities, and the goals motivating their tracking. More research is needed to establish the exact impact of self-tracking on users’ epistemic self-confidence and their agential capacity. Considering that the potential benefits and harms of self-tracking technologies vary highly across users and use contexts, we would suggest that ethical frameworks providing contextual, situated and user-oriented analyses (such as virtue ethics) might be better suited for the evaluation of self-tracking than frameworks that employ high-level and abstract principles. Although the latter might have been designed to be universally applicable, the great variety of both beneficial and harmful effects that self-tracking can have on users might not be accurately captured by reference to general principles such as autonomy and privacy.

Similarly, authors present conflicting conclusions relating to users’ capacity to develop and maintain new habits as a result of self-tracking. While the section on empowerment highlights the potential for users to gain more control over their habitual actions and develop individual plans for behavior change, the discussion reconstructed in the section on social harms could lead to a more pessimistic view of self-quantification as a practice influencing users to conform to the already existing social norms regardless of their individual circumstances. More research is required to determine which of these two possible outcomes is more likely and how to design and use self-tracking devices to maximize users’ benefit. Ethical frameworks focusing on habitual action and individual disposition toward specific kinds of behavior (such as virtue ethics and American pragmatism) should be useful in achieving this goal as they should allow researchers to examine various use cases of self-tracking technologies and accurately determine their likely effect in particular circumstances. Existing contributions to the literature do not devote much attention to the issue of habit-formation, only providing superficial statements about self-tracking’s potential to allow users more control over their everyday habits. The only exceptions were the book by Neff and Nafus (Citation2016) which provides a detailed ethnographic analysis of the ways in which habits can be developed through self-tracking, and the article by Toner (Citation2018), which presents habit-formation occurring through self-tracking as an example of nudging infringing on users’ autonomy.

The discussion surrounding community and solidarity has also proven inconclusive. Although influential authors such as Morozov (Citation2013) critique self-tracking for fostering an overly narcissistic view of the individual, the emergence of the Quantified Self movement and the data sharing that lies at the foundation of self-tracking practices show the potential of self-tracking technologies to foster solidarity and contribute to community formation. However, as demonstrated in the section on solidarity, even the authors who develop a favorable view of the solidaristic dimension of self-tracking (Barta & Neff, Citation2016; Sharon, Citation2017) voice reservations and point to the limited scope of the self-tracking community or to certain ambiguities inherent in self-tracking practices. Moreover, they do not define the concept of solidarity and do not elaborate upon the obligations which this concept should entail. The closest attempt to do so is undertaken by Sharon (Citation2017), but even she contends that the kinds of solidarity found within self-tracking communities might not fit the definitions found in the philosophical literature.

Since the application of different understandings of solidarity could lead to drastically different assessments of the public goods, risks and obligations arising within the practice, more work is needed to determine the nature and extent of the communal relations occurring within the self-tracking sphere. For example, the current debate on the solidaristic dimension of the sharing of self-tracking data could benefit from a discussion of obligations to share data and participate in self-tracking communities in light of the potential health and other benefits commonly associated with the practice.

Moreover, it should be noted that even when engaging in normative discussion, authors tend to describe particular problems but fail to arrive at recommendations for addressing them. Although this limitation of the existing literature can be observed in relation to any of the categories discussed in this review, we believe that it is best exemplified by the discussion regarding ownership and commodification of data. While authors agree that the ownership structure of self-tracked data is unclear and the common practice of companies selling user data to advertisers and other partners is at the very least ethically problematic, only Neff and Nafus (Citation2016) present an alternative way of thinking about data generated through self-tracking devices (i.e., by treating users and device manufacturers as “parents” of the data who do not own it, but have certain rights and responsibilities to their shared product). To address this limitation, the future normative discussion surrounding self-tracking technologies should contain more recommendations for their development and use.

Interestingly, as far as commodification of data is concerned, none of the authors develop an argument from the perspective of justice or fairness. These principles are not mentioned even by the authors engaging with ideas such as distribution of profits generated through self-tracking data or compensation for the digital labor in which the users take part (Lupton, Citation2016a; Schulz, Citation2016; Till, Citation2014). In general, authors raise social and political issues merely from a descriptive standpoint (as seen, for example, through recurring references to the concept of biopolitics) and they do not explicitly engage in normative debates about the politics of self-tracking technologies and practices. Consequently, when discussing politically contentious issues, such as, for example, the distribution of profits derived from self-tracking, authors do not take clear positions in the political debate and do not argue in favor of specific solutions to the problem.Footnote13 An analysis of these issues from the perspective of justice and fairness would be a significant contribution to the existing discussion and would provide a better basis for some intuitions already found in the literature. Moreover, a justice-oriented ethical framework should allow researchers to analyze some of the epistemic issues surrounding self-tracking (such as datafication) from the standpoint of epistemic justice (Fricker, Citation2007). Many of the problems identified in the section on social harms (e.g., reproduction and enforcement of existing social norms to the detriment of users who do not fit them) should also lend themselves well to a justice-based analysis.

Conclusions

This paper presents a systematic review of ethical aspects of self-tracking practices and technologies. As a result of a database search and backwards snowballing, a total of 65 works was extracted and analyzed for the purposes of this review. The ethical aspects discussed in the current literature have been divided into thirteen categories, with three dealing with the ethical opportunities of self-tracking (empowerment and well-being; contribution to health goals; solidarity) and a further ten collecting concerns surrounding the discussed technologies (social harms; privacy and surveillance; ownership, control and commodification of data; autonomy; data-facilitated harm; datafication and interpretability of data; negative impact on relation to self and others; shortcomings of design; negative impact on health perception; regulation and enforcement of rules).

This review found that the existing contributions to the discussion of ethical aspects of self-tracking lack depth and conceptual clarity, do not systematically employ their leading methodologies, and are largely devoid of anticipatory analyses and recommendations. Most authors focus only on the current normative implications of self-tracking and do not analyze them in much detail or on the basis of well-defined theoretical and methodological tools. Moreover, when the present literature is analyzed as a whole, many of its findings prove inconclusive and several aspects appear insufficiently developed (e.g., discussion of self-tracking from the perspective of justice or solidarity, and recommendations for future development and use of self-tracking devices).

Considering the necessity of further ethical research on self-tracking technologies, it is possible to outline a clear research agenda that could help alleviate most of the shortcomings of the current state of debate. Future ethical research on self-tracking practices and technologies should strive to contribute to our understanding of self-quantification by focusing on questions of justice, solidarity, and the epistemic and behavioral impact of self-tracking (including habit formation), which merit greater scrutiny than what has been presented up to this point. A more systematic normative analysis of self-tracking is necessary in light of the inconclusive and often contradictory findings presented by the existing contributions to the literature, as well as the breadth of potential future applications of self-tracking in all walks of life.

Acknowledgments

At the time of submission, Michał Wieczorek received funding from the PROTECT project, which is supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 813497.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the European Union’s Horizon 2020 programme [813497].

Notes

1 For example, a principle-based approach does not place as much emphasis on the character of stakeholders as virtue ethics does, whereas virtue ethics could be seen as privileging the point of view of the individual over the needs of a moral community. Likewise, in our review, no single work contained references to issues belonging to all 13 categories we identified, let alone to all of the issues within even a single category.

2 Moreover, the advancement of AI technologies might mean that in the future, it will be possible to mine quantitative data from photos and audio files. In this sense, self-tracking might subsume parts of lifelogging by translating qualitative means of retention found in lifelogging into quantitative data points.

3 Scopus indexes over 16,000 peer-reviewed journals. It can be accessed at: https://www.scopus.com/

4 Web of Science covers over 10,000 peer-reviewed journals. It can be accessed at: https://www.webofknowledge.com/

5 It was selected as it provides access to over 7,000 peer-reviewed journals. It can be accessed at: https://search.ebscohost.com/

6 PhilPapers was chosen to diversify search results by including more specialized philosophical literature selected by philosophical researchers that might not have been included in other databases. It can be accessed at: philpapers.org

7 Of course, neuroethics is a subfield of ethics rather than a distinct theoretical approach. However, this paper was the only work focusing on self-tracking from a neuroethical perspective and did not provide other information about methods or theories used within it. Consequently, we decided that labeling the paper as neuroethical in method would be more informative than classifying it in the “unknown” category.

8 The article in question presented a literature review dealing with the imposition of self-tracking in a healthcare context, not a literature review explicitly discussing ethical issues – it is largely descriptive, unlike the aforementioned literature review of ethical issues of lifelogging by Jacquemard et al. (Citation2014).

9 By new senses, Sharon and Zandbergen understand self-tracking devices’ sensors functioning as if they were extensions of users’ normal sensory apparatus.

10 Foucault defines technologies of the self as practices “which permit individuals to effect by their own means or with the help of others a certain number of operations on their own bodies and souls, thoughts, conduct, and way of being, so as to transform themselves in order to attain a certain state of happiness, purity, wisdom, perfection, or immortality.” (Foucault, Citation1988, p. 18). Examples of such technologies include the Stoic practices of self-reflexion and improvement, and the Christian practice of confession.

11 Deleuze (Citation1992) argued that contemporary societies have shifted from disciplinary modes of maintaining social order (i.e., keeping citizens in check through the threat of punitive measures and the deployment of closed-off institutions such as schools, factories or prisons) to societies of control, which create more open-ended and fluid institutions for keeping people in check, replacing factories with corporations and prisons with electronic monitoring. Societies of control exert influence over their citizens, treated as a mass rather than as individuals, by means of surveillance, data collection, economic incentives and other indirect forms of control. However, it is not completely clear what the authors discussed in this review mean exactly by invoking the concept of the control society, as self-tracking deals with individuals rather than the masses. It is possible that these comments concern the aggregation of individuals’ data into larger data sets.

12 Arguably, many of the institutional practices of self-tracking more resemble surveillance rather than self-quantification done out of one’s own volition. However, the authors still consider applications such as workplace wellness and productivity monitoring schemes to fall under the term self-tracking (especially as workers do participate in the practice and often have some influence over it). We agree that the distinction between self-tracking and surveillance can be blurry in such contexts.

13 Even authors critically mentioning the neoliberal slant of self-tracking (see the section on social harms) predominantly focus on the description of the neoliberal politics of self-quantification, and do not explicitly debate these assumptions or promote alternative political positions that should be considered in the context of the discussed technologies.

References

  • Ajana, B. (2017). Digital health and the biopolitics of the Quantified Self. Digital Health, 3, 1–18. https://doi.org/10.1177/2055207616689509
  • Ajana, B. (2018a). Communal self-tracking: Data philanthropy, solidarity and privacy. In B. Ajana (Ed.), Self-tracking: Empirical and philosophical investigations (pp. 125–141). Palgrave Macmillan. http://link.springer.com/10.1007/978-3-319-65379-2
  • Apple. (2020, September 15). Apple Watch Series 6 delivers breakthrough wellness and fitness capabilities [Press release]. https://www.apple.com/newsroom/2020/09/apple-watch-series-6-delivers-breakthrough-wellness-and-fitness-capabilities/
  • Aristotle. (2004). Nicomachean ethics (R. Crisp, Trans.). Cambridge University Press.
  • Arora, C. (2019). Digital health fiduciaries: Protecting user privacy when sharing health data. Ethics and Information Technology, 21(3), 181–196. https://doi.org/10.1007/s10676-019-09499-x
  • Baker, D. A. (2020). Four ironies of self-quantification: Wearable technologies and the Quantified Self. Science & Engineering Ethics, 26(3), 1477–1498. https://doi.org/10.1007/s11948-020-00181-w
  • Barassi, V. (2017). BabyVeillance? Expecting parents, online surveillance and the cultural specificity of pregnancy apps. Social Media + Society, 3(2), 1–10. https://doi.org/10.1177/2056305117707188
  • Barta, K., & Neff, G. (2016). Technologies for sharing: Lessons from Quantified Self about the political economy of platforms. Information, Communication & Society, 19(4), 518–531. https://doi.org/10.1080/1369118X.2015.1118520
  • Bell, G., & Gemmell, J. (2009). Total recall: How the E-memory revolution will change everything. Penguin Books.
  • Bohn, D. (2020, August 27). Amazon announces Halo, a fitness band and app that scans your body and voice. The Verge. https://www.theverge.com/2020/8/27/21402493/amazon-halo-band-health-fitness-body-scan-tone-emotion-activity-sleep
  • Borthwick, A. C., Anderson, C. L., Finsness, E. S., & Foulger, T. S. (2015). Special article: Personal wearable technologies in education: Value or villain? Journal of Digital Learning in Teacher Education, 31(3), 85–92. https://doi.org/10.1080/21532974.2015.1021982
  • Brey, P. A. E. (2012). Anticipatory ethics for emerging technologies. NanoEthics, 6(1), 1–13. https://doi.org/10.1007/s11569-012-0141-7
  • Business Wire. (2020). Worldwide wearables market forecast to maintain double-digit growth in 2020 and through 2024, according to IDC. https://www.businesswire.com/news/home/20200925005409/en/Worldwide-Wearables-Market-Forecast-to-Maintain-Double-Digit-Growth-in-2020-and-Through-2024-According-to-IDC
  • Cederström, C., & Spicer, A. (2015). The wellness syndrome. Polity.
  • Challa, N., Yu, S., & Kunchakarra, S. (2017). Wary about wearables: Potential for the exploitation of wearable health technology through employee discrimination and sales to third parties. Intersect, 10(3), 13. https://ojs.stanford.edu/ojs/index.php/intersect/article/view/1003.
  • Chee, F. Y. (2020, December 17). Google wins EU antitrust nod for $2.1 billion fitbit deal. Reuters. https://www.reuters.com/article/fitbit-m-a-alphabet-eu-idUSKBN28R1ZS
  • Crawford, K., Lingel, J., & Karppi, T. (2015). Our metrics, ourselves: A hundred years of self-tracking from the weight scale to the wrist wearable device. European Journal of Cultural Studies, 18(4–5), 479–496. https://doi.org/10.1177/1367549415584857
  • Daly, A. (2015). The law and ethics of ‘self-quantified’ health information: An Australian perspective. International Data Privacy Law, 5(2), 144–155. https://doi.org/10.1093/idpl/ipv001
  • Danaher, J., Nyholm, S., & Earp, B. D. (2018a). The benefits and risks of quantified relationship technologies: Response to open peer commentaries on “The quantified relationship.” The American Journal of Bioethics, 18(2), 3–6. https://doi.org/10.1080/15265161.2017.1409823
  • Danaher, J., Nyholm, S., & Earp, B. D. (2018b). The quantified relationship. The American Journal of Bioethics, 18(2), 3–19. https://doi.org/10.1080/15265161.2017.1409823
  • Deleuze, G. (1992). Postscript on the societies of control. October, 59, 3–7. https://www.jstor.org/stable/778828
  • Deloitte. (2017). The future awakens: Life sciences and health predictions 2022. https://www2.deloitte.com/content/dam/Deloitte/cz/Documents/life-sciences-health-care/cz-lshc-predictions-2022.pdf
  • Duus, R., Cooray, M., & Page, N. C. (2018). Exploring human-tech hybridity at the intersection of extended cognition and distributed agency: A focus on self-tracking devices. Frontiers in Psychology, 9, 1432. https://doi.org/10.3389/fpsyg.2018.01432
  • Floridi, L., & Strait, A. (2020). Ethical foresight analysis: What it is and why it is needed? Minds and Machines, 30(1), 77–97. https://doi.org/10.1007/s11023-020-09521-y
  • Fotopoulou, A., & O’Riordan, K. (2017). Training to self-care: Fitness tracking, biopedagogy and the healthy consumer. Health Sociology Review, 26(1), 54–68. https://doi.org/10.1080/14461242.2016.1184582
  • Foucault, M. (1988). Technologies of the self. In L. H. Martin, H. Gutman, & P. H. Hutton (Eds.), Technologies of the self: A seminar with Michel Foucault (pp. 16–49). The University of Massachusetts Press.
  • Fowler, G. A., & Kelly, H. (2020, December 10). Review | Amazon’s new health band is the most invasive tech we’ve ever tested. Washington Post. https://www.washingtonpost.com/technology/2020/12/10/amazon-halo-band-review/
  • Frank, L., & Klincewicz, M. (2018). Swiping left on the quantified relationship: Exploring the potential soft impacts. The American Journal of Bioethics, 18(2), 27–28. https://doi.org/10.1080/15265161.2017.1409833
  • Franklin, B. (2005). The autobiography of Benjamin Franklin (P. Conn, Ed.). University of Pennsylvania Press.
  • Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press.
  • Friesen, N. (2017). Confessional technologies of the self: From Seneca to social media. First Monday, 22(6). Article 6. https://doi.org/10.5210/fm.v22i6.6750
  • Gabriels, K., & Moerenhout, T. (2018). Exploring entertainment medicine and professionalization of self-care: Interview study among doctors on the potential effects of digital self-tracking. Journal of Medical Internet Research, 20(1), e10. https://doi.org/10.2196/jmir.8040
  • Gabriels, K., & Coeckelbergh, M. (2019). ‘Technologies of the self and other’: How self-tracking technologies also shape the other. Journal of Information, Communication and Ethics in Society, 17(2), 119–127. https://doi.org/10.1108/JICES-12-2018-0094
  • Gartenberg, C. (2019, November 1). Google buys fitbit for $2.1 billion. The Verge. https://www.theverge.com/2019/11/1/20943318/google-fitbit-acquisition-fitness-tracker-announcement
  • Gertenbach, L., & Mönkeberg, S. (2016). Lifelogging and vital normalism: Sociological reflections on the cultural impact of the reconfiguration of body and self. In S. Selke (Ed.), Lifelogging: Digital self-tracking and lifelogging—Between disruptive technology and cultural transformation (pp. 25–42). Springer VS. https://doi.org/10.1007/978-3-658-13137-1_2
  • Gimbert, C., & Lapointe, F.-J. (2015). Self-tracking the microbiome: Where do we go from here? Microbiome, 3(1), Article 70. https://doi.org/10.1186/s40168-015-0138-x
  • Ha, D. (2017). Scripts and re-scriptings of self-tracking technologies: Health and labor in an age of hyper-connectivity. Asia Pacific Journal of Health Law & Ethics, 10(3), 67–86.
  • Heehs, P. (2013). Writing the self: Diaries, memoirs, and the history of the self. Bloomsbury Academic.
  • Hill, D. W. (2019). Speed and pessimism: Moral experience in the work of Paul Virilio. Journal for Cultural Research, 23(4), 411–424. https://doi.org/10.1080/14797585.2020.1716141
  • Hoy, M. B. (2016). Personal activity trackers and the Quantified Self. Medical Reference Services Quarterly, 35(1), 94–100. https://doi.org/10.1080/02763869.2016.1117300
  • Hull, G. (2018). The politics of quantified relationships. The American Journal of Bioethics, 18(2), 29–30. https://doi.org/10.1080/15265161.2017.1409831
  • Jacquemard, T., Novitzky, P., O’Brolchain, F., Smeaton, A. F., & Gordijn, B. (2014). Challenges and opportunities of lifelog technologies: A literature review and critical analysis. Science and Engineering Ethics, 20(2), 379–409. https://doi.org/10.1007/s11948-013-9456-1
  • Jalali, S., & Wohlin, C. (2012). Systematic literature studies: Database searches vs. backward snowballing. In Proceedings of the ACM-IEEE international symposium on empirical software engineering and measurement (ESEM ’12) (p. 29). Association for Computing Machinery. https://doi.org/10.1145/2372251.2372257
  • Kahrass, H., Borry, P., Gastmans, C., Ives, J., van der Graaf, R., Strech, D., & Mertz, M. (2021). PRISMA-Ethics – Reporting guideline for systematic reviews on ethics literature: Development, explanations and examples [OSF Preprints]. https://doi.org/10.31219/osf.io/g5kfb
  • Klauser, F. R., & Albrechtslund, A. (2014). From self-tracking to smart urban infrastructures: Towards an interdisciplinary research agenda on big data. Surveillance and Society, 12(2), 273–286. https://doi.org/10.24908/ss.v12i2.4605
  • Kleinpeter, E. (2017). Four ethical issues of “E-health.” IRBM, 38(5), 245–249. https://doi.org/10.1016/j.irbm.2017.07.006
  • Klugman, C., Dunn, L. B., Schwartz, J., & Cohen, I. G. (2018). The ethics of smart pills and self-acting devices: Autonomy, truth-telling, and trust at the dawn of digital medicine. The American Journal of Bioethics, 18(9), 38–47. https://doi.org/10.1080/15265161.2018.1498933
  • Klugman, C. (2018). I, my love, and apps. The American Journal of Bioethics, 18(2), 1–2. https://doi.org/10.1080/15265161.2018.1423793
  • Kreitmair, K., & Cho, M. K. (2017). The neuroethical future of wearable and mobile health technology. In J. Illes (Ed.), Neuroethics: Anticipating the future (pp. 80–107). Oxford University Press. https://doi.org/10.1093/oso/9780198786832.003.0005
  • Kreitmair, K. (2018). Phenomenological considerations of sex tracking technology. The American Journal of Bioethics, 18(2), 31–33. https://doi.org/10.1080/15265161.2017.1409842
  • Kristensen, D. B., & Ruckenstein, M. (2018). Co-Evolving with self-tracking technologies. New Media & Society, 20(10), 3624–3640. https://doi.org/10.1177/1461444818755650
  • Lanzing, M. (2016). The transparent self. Ethics and Information Technology, 18(1), 9–16. https://doi.org/10.1007/s10676-016-9396-y
  • Lanzing, M. (2019). “Strongly recommended” revisiting decisional privacy to judge hypernudging in self-tracking technologies. Philosophy & Technology, 32(3), 549–568. https://doi.org/10.1007/s13347-018-0316-4
  • Li, N., & Hopfgartner, F. (2016). To log or not to log? SWOT analysis of self-tracking. In S. Selke (Ed.), Lifelogging: Digital self-tracking and lifelogging—Between disruptive technology and cultural transformation (pp. 305–325). Springer VS.
  • Lifkova, A. (2019). Digital power: Self-tracking technologies through Michel Foucault lens. Politické Vedy, 22(4), 81–101. https://doi.org/10.24040/politickevedy.2019.22.4.81-101
  • Lomborg, S., Langstrup, H., & Andersen, T. O. (2020). Interpretation as luxury: Heart patients living with data doubt, hope, and anxiety. Big Data & Society, 7(1), 1–13. https://doi.org/10.1177/2053951720924436
  • Lupton, D. (2013). Quantifying the body: Monitoring and measuring health in the age of mHealth technologies. Critical Public Health, 23(4), 393–403. https://doi.org/10.1080/09581596.2013.794931
  • Lupton, D. (2015a). Lively data, social fitness and biovalue: The intersections of health self-tracking and social media. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2666324
  • Lupton, D. (2015b). Quantified sex: A critical analysis of sexual and reproductive self-tracking using apps. Culture, Health & Sexuality, 17(4), 440–453. https://doi.org/10.1080/13691058.2014.920528
  • Lupton, D. (2016a). Quantified Self. Polity Press.
  • Lupton, D. (2016b). The diverse domains of quantified selves: Self-tracking modes and dataveillance. Economy and Society, 45(1), 101–122. https://doi.org/10.1080/03085147.2016.1143726
  • Lupton, D., & Smith, G. J. D. (2018). ‘A much better person’: The agential capacities of self-tracking practices. In B. Ajana (Ed.), Metric culture (pp. 57–75). Emerald Publishing Limited.
  • Martens, H., & Brown, T. E. (2018). Relational autonomy and the quantified relationship. The American Journal of Bioethics, 18(2), 39–40. https://doi.org/10.1080/15265161.2017.1409835
  • Maturo, A., & Setiffi, F. (2015). The gamification of risk: How health apps foster self-confidence and why this is not enough. Health, Risk & Society, 17(7–8), 477–494. https://doi.org/10.1080/13698575.2015.1136599
  • Mertz, M., Kahrass, H., & Strech, D. (2016). Current state of ethics literature synthesis: A systematic review of reviews. BMC Medicine, 14(1), 152. https://doi.org/10.1186/s12916-016-0688-1
  • Mertz, M., Strech, D., & Kahrass, H. (2017). What methods do reviews of normative ethics literature use for search, selection, analysis, and synthesis? In-depth results from a systematic review of reviews. Systematic Reviews, 6(1), 261. https://doi.org/10.1186/s13643-017-0661-x
  • Moore, P., & Robinson, A. (2016). The Quantified Self: What counts in the neoliberal workplace. New Media & Society, 18(11), 2774–2792. https://doi.org/10.1177/1461444815604328
  • Moore, P. (2017). The Quantified Self in precarity: Work, technology and what counts in the Neoliberal workplace (1st ed.). Routledge. https://doi.org/10.4324/9781315561523
  • Moore, P., & Piwek, L. (2017). Regulating wellbeing in the brave new quantified workplace. Employee Relations, 39(3), 308–316. https://doi.org/10.1108/ER-06-2016-0126
  • Morgan, H. (2016). ‘Pushed’ self-tracking using digital technologies for chronic health condition management: A critical interpretive synthesis. Digital Health, 2, 1–41. https://doi.org/10.1177/2055207616678498
  • Morozov, E. (2013). To save everything, click here: Technology, solutionism, and the urge to fix problems that don’t exist. PublicAffairs.
  • Neff, G., & Nafus, D. (2016). Self-tracking. The MIT Press.
  • Nissenbaum, H., & Patterson, H. (2016). Biosensing in context: Health privacy in a connected world. In D. Nafus (Ed.), Quantified: Biosensing technologies in everyday life (pp. 79–100). The MIT Press.
  • Oravec, J. A. (2020). Digital iatrogenesis and workplace marginalization: Some ethical issues involving self-tracking medical technologies. Information, Communication & Society, 23(14), 2030–2046. https://doi.org/10.1080/1369118X.2020.1718178
  • Owens, J., & Cribb, A. (2019). ‘My Fitbit thinks I can do better!’ Do health promoting wearable technologies support personal autonomy? Philosophy and Technology, 32(1), 23–38. https://doi.org/10.1007/s13347-017-0266-2
  • Piwek, L., Ellis, D. A., Andrews, S., & Joinson, A. (2016). The rise of consumer health wearables: Promises and barriers. PLOS Medicine, 13(2), e1001953. https://doi.org/10.1371/journal.pmed.1001953
  • Ramirez, E. (2013, January 27). Pew Internet research: 21% self-track with technology. Quantified Self. https://quantifiedself.com/blog/pew-internet-research-the-state-of-self-tracking/
  • Richardson, S., & Mackinnon, D. (2018). Becoming your own device: Self-tracking challenges in the workplace. Canadian Journal of Sociology, 43(3), 265–290. https://doi.org/10.29173/cjs28974
  • Sanders, R. (2017). Self-tracking in the digital era: Biopower, patriarchy, and the new biometric body projects. Body & Society, 23(1), 36–63. https://doi.org/10.1177/1357034X16660366
  • Schulz, P. (2016). Lifelogging: A project of liberation or a source of reification. In S. Selke (Ed.), Lifelogging: Digital self-tracking and lifelogging—between disruptive technology and cultural transformation (pp. 43–59). Springer VS.
  • Selke, S. (2016a). Introduction. In S. Selke (Ed.), Lifelogging: Digital self-tracking and lifelogging—Between disruptive technology and cultural transformation (pp. 1–21). Springer VS.
  • Selke, S. (2016b). Rational discrimination and lifelogging: The expansion of the combat zone and the new taxonomy of the social. In S. Selke (Ed.), Lifelogging: Digital self-tracking and lifelogging—Between disruptive technology and cultural transformation (pp. 345–372). Springer VS.
  • Sharon, T., & Zandbergen, D. (2017). From data fetishism to quantifying selves: Self-tracking practices and the other values of data. New Media & Society, 19(11), 1695–1709. https://doi.org/10.1177/1461444816636090
  • Sharon, T. (2017). Self-tracking for health and the Quantified Self: Re-articulating autonomy, solidarity, and authenticity in an age of personalized healthcare. Philosophy & Technology, 30(1), 93–121. https://doi.org/10.1007/s13347-016-0215-5
  • Sharon, T. (2018). Let’s move beyond critique—but please, let’s not depoliticize the debate. The American Journal of Bioethics, 18(2), 20–22. https://doi.org/10.1080/15265161.2017.1409836
  • Sofaer, N., & Strech, D. (2012). The need for systematic reviews of reasons. Bioethics, 26(6), 315–328. https://doi.org/10.1111/j.1467-8519.2011.01858.x
  • Swirsky, E. S., & Boyd, A. D. (2018). Love in the time of quantified relationships. The American Journal of Bioethics, 18(2), 35–37. https://doi.org/10.1080/15265161.2017.1409828
  • Till, C. (2014). Exercise as labour: Quantified Self and the transformation of exercise into labour. Societies, 4(3), 1–17. https://doi.org/10.3390/soc4030446
  • Till, C. (2018). Commercialising bodies: Action, subjectivity and the new corporate health ethic. In R. Lynch & C. Farrington (Eds.), Quantified lives and vital data: Exploring health and technology through personal medical devices (pp. 229–249). Palgrave Macmillan. https://doi.org/10.1057/978-1-349-95235-9_10
  • Toner, J. (2018). Exploring the dark-side of fitness trackers: Normalization, objectification and the anaesthetisation of human experience. Performance Enhancement and Health, 6(2), 75–81. https://doi.org/10.1016/j.peh.2018.06.001
  • Wohlin, C. (2014). Guidelines for snowballing in systematic literature studies and a replication in software engineering. In Proceedings of the 18th international conference on evaluation and assessment in software engineering (EASE'14). (pp. 1–10). https://doi.org/10.1145/2601248.2601268