Journal of Medicine and Philosophy
A Forum for Bioethics and Philosophy of Medicine
Volume 32, 2007 - Issue 3
Original Articles

Nano-Technology and Privacy: On Continuous Surveillance Outside the Panopticon

Pages 283-297 | Published online: 27 Jun 2007

Abstract

We argue that nano-technology in the form of invisible tags, sensors, and Radio Frequency Identity Chips (RFIDs) will give rise to privacy issues that differ in two ways from the traditional privacy issues of the last decades. First, they will not revolve exclusively around the centralization of surveillance and the concentration of power, as the metaphor of the Panopticon suggests, but around constant observation at decentralized levels. Second, privacy concerns may not be exclusively about constraining information flows but also about the design of materials and nano-artifacts such as chips and tags. We begin by presenting a framework for structuring the current debates on privacy, and then present our arguments.

I. INTRODUCTION

Privacy is one of the major moral issues that will be discussed in connection with the development and applications of nano-technology (Gutierrez, 2004; Mehta, 2003). Privacy has already been an important theme in our thinking about information technology in the last decades (Van den Hoven, 2005). Nano-technology incorporates and integrates different technologies including information technology. Invisible Radio Frequency Identification chips (RFIDs), integrated circuits, tags, nano-dust, minute (bio) sensors, recordable clothing and intelligent fabrics, films, and smart surfaces will find their way to the worlds of retail, supply chains, health care, logistics, shops and warehouses, criminal justice and security. Hence, by these information-technological applications, nano-technology will give rise to a panoply of privacy issues.

In this paper we begin by presenting a framework for structuring the current debates on privacy in the context of information technology. This framework provides a taxonomy of moral reasons for data protection and it makes explicit the current view that the privacy of persons is primarily related to the content and processing of information about those persons.Footnote 1

Then we turn to nano-technology and explore whether this technology introduces new problems and calls for new points of view. We argue that nano-technology will enable surreptitious in situ information gathering in local and transient databases, in addition to the more classical information flows to and from permanent centralized databases. The emergence of these new types of information processing will shift the primary focus in privacy debates from constraints on the processing of the information itself and the management of the databases in which it is stored, to constraints on the design of nano-artifacts which enable these flows of information. In a world in which nano-technology is applied, privacy of persons will therefore become more and more related to the information processing properties of the nano-artifacts and the information conductivity of designer materials that surround persons with ambient intelligence.

Although the setting, background, and topics of privacy debates may change as a result of the introduction and use of nano-technology, the moral significance of the protection of personal information will remain. In the following section we provide an account of privacy's lasting importance.

The moral reasons we provide here for the protection of personal information in general should and can be used to evaluate the implications of surveillance technology with submicron and nano-electronics components. Without a clear conceptual framework and a taxonomy of moral reasons it is difficult to be clear about moral requirements for the design of a new generation of surveillance devices. Engineering design cannot be done on the basis of vague, nondescript notions such as “the personal life sphere,” or “privacy.”Footnote 2

II. PRIVACY'S IMPORTANCE

Laws, policies, and regulations to protect the personal sphere and the privacy of persons have been formulated and implemented in the last 100 years around the world, but not without debate and controversy. Privacy has been the subject of much academic discussion (DeCew, 1997; Nissenbaum, 2004; Roessler, 2005; Schoeman, 1984). Different authors have presented different accounts of privacy, but the majority of them agree that privacy is important in human lives. Privacy laws and regulations define constraints on the processing of personal information. In Europe the main moral principle in the area of personal data ― familiar in medicine and medical ethics ― is the principle of informed consent (EU, 1995). Before personal data can be processed, informed consent from the data subject is required. Although there are many different accounts of privacy, the following taxonomy of moral reasons for the justification of protection of personal information captures many of them and has the advantage of turning the privacy discussion into a tractable problem:

  1. prevention of information-based harm,

  2. prevention of informational inequality,

  3. prevention of informational injustice, and

  4. respect for moral autonomy.

In claiming privacy we do not merely want to be “left alone” or to be “private”; more concretely, we want to prevent others from harming us, treating us unfairly, discriminating against us, or making assumptions about who we are.

III. INFORMATION-BASED HARM

The first type of moral reason for data protection is concerned with the prevention of harm, more specifically harm that is done to persons by making use of personal information about them. The fact that personal information is used to inflict harm or cause serious disadvantages to individuals does not necessarily make such uses violations of a moral right to privacy.

Criminals are known to have used databases and the Internet to get information on their victims in order to prepare and stage their crimes. The most important moral problem with “identity theft,” for example, is the risk of financial and physical damage. One's bank account may be plundered and one's credit reports may be irreversibly tainted so as to exclude one from future financial benefits and services. Stalkers and rapists have used the Internet and on-line databases to track down their victims. They could not have done what they did without tapping into these resources.

In an information society there is a new vulnerability to information-based harm. The prevention of information-based harm provides government with the strongest possible justification for limiting the freedom of individual citizens to find out about each other. No moral principle other than John Stuart Mill's harm principle is needed to justify limitations of the freedom of persons who cause, threaten to cause, or are likely to cause, information-based harms to people. Protecting personal information, instead of leaving it in the open, diminishes the likelihood that people will come to harm, analogous to the way in which restricting access to firearms diminishes the likelihood that people will get shot in the street. We know that if we do not establish a legal regime that somehow constrains citizens' access to weapons, the likelihood that innocent people will get shot increases. In information societies, personal information is comparable to guns and ammunition. We should act accordingly.

IV. INFORMATIONAL EQUALITY

The second type of moral reason to justify data protection is concerned with equality and fairness. More and more people are keenly aware of the benefits a market for personal data can provide. If a consumer buys coffee at the shopping mall, information about that transaction can be generated and stored. Many consumers realize that every time they come to the counter to buy something, they can also sell something, namely, information about their purchase or transaction, so-called “transactional data.” Likewise, sharing information about ourselves on the Internet with web sites, browsers, and autonomous agents may pay off in terms of more and more adequate information (or discounts and convenience) later. Many privacy concerns have been and will be resolved in quid pro quo practices and private contracts about the use and secondary use of personal data.

But although a market mechanism for trading personal data seems to be kicking in on a global scale, not all individual consumers are aware of this economic opportunity, and those who are aware of it are not always trading their data in a transparent and fair market environment. Moreover, they do not always know what the implications are of what they are consenting to when they sign a contract. We simply cannot assume that the conditions of the developing market for personal data guarantee fair transactions by independent standards. Data-protection laws can help to guarantee equality and a fair market for personal data. Data-protection laws in these types of cases typically protect individual citizens by requiring openness, transparency, participation, and notification on the part of business firms and direct marketers to secure fair contracts.

V. INFORMATIONAL INJUSTICE

A third and important moral reason to justify the protection of personal data is concerned with justice in a sense which is associated with the work of the political philosopher Michael Walzer (1983). Walzer has objected to the simplicity of Rawls' conception of primary goods and universal rules of distributive justice by pointing out that “there is no set of basic goods across all moral and material worlds, or they would have to be so abstract that they would be of little use in thinking about particular distributions.”

Goods have no natural meaning; their meaning is the result of socio-cultural construction and interpretation. To determine what is a just distribution of a good, we have to determine what it means to those for whom it is a good. In the medical, the political, and the commercial spheres there are different goods — medical treatment, political office, and money — which are allocated by means of different allocation or distributive practices: medical treatment on the basis of need, political office on the basis of desert, and money on the basis of free exchange. What ought to be prevented, and often is prevented as a matter of fact, is the dominance of particular goods. Walzer calls a good dominant if the individuals that have it, because they have it, can command a wide range of other goods.

A monopoly is a way of controlling certain social goods in order to exploit their dominance. In that case advantages in one sphere can be converted as a matter of course into advantages in other spheres. This happens when money (commercial sphere) can buy you a vote (political sphere), give you preferential treatment in healthcare (medical sphere), get you a university degree (educational sphere), etc. We resist the dominance of money — and of other social goods for that matter (land, physical strength) — and think that political arrangements allowing for it are unjust. No social good x should be distributed to men and women who possess some other good y merely because they possess y and without regard to the meaning of x.

What is especially offensive to our sense of justice, Walzer argues, is, first, the allocation of goods internal to sphere A on the basis of the distributive logic or the allocation scheme associated with sphere B; second, the transfer of goods across the boundaries of separate spheres; and, third, the dominance and tyranny of some goods over others. To prevent this, the “art of separation” of spheres has to be practiced and “blocked exchanges” between them have to be put in place. If the art of separation is effectively practiced and the autonomy of the spheres of justice is guaranteed, then “complex equality” is established. One's status in terms of the holdings and properties in one sphere is irrelevant — ceteris paribus — to the distribution of the goods internal to another sphere.

Walzer's analysis also applies to information (Nissenbaum, 2004; Van den Hoven, 1998; Van den Hoven & Cushman, 1996). The meaning and value of information is local, and allocation schemes and local practices that distribute access to information should accommodate local meaning and should therefore be associated with specific spheres. Many people do not object to the use of their personal medical data for medical purposes, whether these are directly related to their own personal health affairs, to those of their family, perhaps even to their community or the world population at large, as long as they can be absolutely certain that the only use that is made of it is to cure people of diseases.

They do object, however, to their medical data being used to disadvantage them socio-economically, to discriminate against them in the workplace, to refuse them commercial services, to deny them social benefits, or to turn them down for mortgages or political offices. They do not mind if their library search data are used to provide them with better library services, but they do mind if these data are used to criticize their tastes and character. They would also object to these informational cross-contaminations when they benefit from them, as when the librarian advises them of a book on low-fat meals on the basis of knowledge of their medical record and cholesterol values, or a doctor poses questions on the basis of the information that one has borrowed a book from the public library about AIDS.

We may thus distinguish another form of informational wrongdoing: “informational injustice,” that is, disrespect for the boundaries of what we may refer to, following Walzer, as “spheres of justice” or “spheres of access.” What is often seen as a violation of privacy is often more adequately construed as the morally inappropriate transfer of data across the boundaries of what we intuitively think of as separate “spheres of justice” or “spheres of access.”

VI. RESPECT FOR MORAL AUTONOMY

Some philosophical theories of privacy account for its importance in terms of moral autonomy, i.e., the capacity to shape our own moral biographies, to reflect on our moral careers, and to evaluate and identify with our own moral choices, without the critical gaze or interference of others and without pressure to conform to “normal” or socially desired identities. This idea of privacy provides us with a bridging concept between the privacy notion and a liberalist conception of the self. Privacy, conceived along these lines, would provide protection to the individual, in his quality of a moral person engaged in self-definition and self-improvement, against the normative pressures which public opinions and moral judgments exert on the person to conform to a socially desired identity. Information about Bill, whether fully accurate or not, facilitates the formation of judgments about Bill. Judgments about Bill, when Bill learns about them, or suspects or fears that they are made, may bring about a change in his view of himself, and may induce him to behave or think differently than he would have done without these judgments.

To modern contingent individuals, who have cast aside the ideas of historical and religious necessity, who live in a highly volatile socio-economic environment, and who appear before a great diversity of audiences and settings, the fixation of one's moral identity by means of the judgments of others is felt as an obstacle to “experiments in living,” as Mill called them. The modern liberal individual wants to be able to determine himself morally, or to undo his previous determinations, on the basis of more profuse experiences in life or additional factual information. Data-protection laws provide the individual with the leeway to do just that.

This conception of the person as being morally autonomous, as being the author and experimenter of his or her own moral career, provides a justification for protecting his personal data. Data-protection laws thus provide protection against the fixation of one's moral identity by anyone other than oneself, and convey to citizens that they are morally autonomous.

A further explanation of the importance of respect for moral autonomy may be provided along the following lines. Factual knowledge of another person is always knowledge by description. The person him- or herself, however, not only knows the facts of his biography by description, but is also the only person acquainted with the associated thoughts, desires, and aspirations. However detailed and elaborate our files and profiles on a person may be, we are never able to refer to the data-subject as he himself is able to do. We can only approximate his knowledge and self-understanding.

Bernard Williams has pointed out that respecting a person involves “identification” in a very special sense, which could be referred to as “moral identification” (Van den Hoven, 1998):

(…) in professional relations and the world of work, a man operates, and his activities come up for criticism, under a variety of professional or technical titles, such as ‘miner’ or ‘agricultural labourer’ or ‘junior executive.’ The technical or professional attitude is that which regards the man solely under that title, the human approach that which regards him as a man who has that title (among others), willingly, unwillingly, through lack of alternatives, with pride, etc. (…) each man is owed an effort at identification: that he should not be regarded as the surface to which a certain label can be applied, but one should try to see the world (including the label) from his point of view. (Williams, 1973, p. 236)

Moral identification thus presupposes knowledge of the point of view of the data-subject and a concern with what it is for a person to live that life. Persons have aspirations, higher-order evaluations, and attitudes, and they see the things they do in a certain light. Representation of this aspect of persons seems exactly what is missing when personal data are piled up in our databases and persons are represented in administrative procedures. The identifications made on the basis of our data fall short of respecting the individual person, because they will never match the identity as it is experienced by the data-subject. They fail because they do not conceive of the other on her terms. Respect for the privacy of persons can thus be seen to have a distinctly epistemic dimension. It represents an acknowledgment that it is impossible to really know other persons as they know and experience themselves.

VII. PANOPTICISM AND DATA PROTECTION

As becomes clear from the discussion of the moral reasons for data protection in the previous section, two assumptions have played a central role in our thinking about privacy thus far. First, the availability of personal information and data is what makes persons vulnerable. Data and information about persons are thus what needs to be protected, and hence data and information themselves should be the focus of our moral, legal, and technical interventions. Second, it is the ability to use personal data and information across different social spheres and sectors, in a coordinated way, and accumulatively, which is instrumental in creating these vulnerabilities. This is realized in an epistemic position that transcends local and partial perspectives, allows for the transgression of local boundaries, and lifts epistemic barriers. We refer to these assumptions in traditional thinking about informational privacy as “data protection” and “panopticism,” respectively:

  1. Panopticism. Information technology enables radical and new forms of centralization of surveillance, monitoring, and power. Bentham's Panopticon is used to illustrate this argument, and the image of Big Brother is often used as a more recent image of centralized power. Panopticism is associated with the potential to exploit and harm people, to take advantage of them or to discriminate against them, or to impose identities on them from an authoritative point of view.

  2. Data protection. Privacy should be protected first and foremost by means of the protection of data and information about persons; that is, by appropriately constraining the acquisition, processing, use, and dissemination of personal data and information.

By analyzing these assumptions in more detail, we argue that developments in nano-technology may relativize their validity rather than vindicate it, as has been suggested in the literature.

VIII. PANOPTICISM

Jeremy Bentham's idea of the ideal prison design has been implemented. Prisons have been built in accordance with the idea of a Panopticon, a hemispherical building with a central view point in the middle. The Panopticon is a transparent construction that allows for unimpeded oversight, and, at the same time, by its design conveys effectively to prisoners that these features are present. Bentham suggested that the mere assumption on the part of inmates that they were always monitored, even if in fact they were not, would constrain them in the required ways.

Ever since Michel Foucault (1977) discussed Bentham's idea of the Panopticon as the ideal prison design in the context of his study on punishment and surveillance, it has shaped the discussions about privacy. In the 1970s and 1980s it was the fear of panopticism, mixed with the image of government as Big Brother, the idea of the mainframe computer, and central databases, that led to the adoption of strong privacy laws in Europe in order to prevent the centralization of monitoring, surveillance, and power at the expense of the freedom and power of individual citizens.

Two aspects of the idea of panopticism should be distinguished. First, the spatial aspect lies in the fact that there is one central place where it all comes together and all perspectives merge. In the dome-shaped prison, the central position allows one single observer to see everything. Let us refer to this aspect of panopticism as its synopticism. Second, there is the temporal aspect that people are continuously visible; let us refer to this as the continuity aspect.

Recently, the idea of “nano-panopticism” was introduced by Michael D. Mehta (2003) to shed light upon some of the prominent ethical implications of nano-technology. The use of the metaphor of the Panopticon, without making the distinction between the continuity and the synoptic aspects, may lead one to think that it is omnipotent centralized surveillance which should bother us primarily. Nano-technology can no doubt in due course play its part in the building of a societal panopticon. Extremely small or invisible sensors may be used to feed information into a central database. However, in the next decades, tagging and RFID technology will also enable radically decentralized use of personal data.

The core technology of this type of tracking, tracing, and surveillance is the widely used Radio Frequency Identity Chip (RFID). An RFID chip or tag consists of a small integrated circuit attached to a tiny radio antenna, which can receive and transmit a radio signal. The storage capacity of the chip can be up to 128 bits. The chip can either supply its own energy from a battery (active tag) or harvest its energy from the reader's radio signal via its antenna (passive tag). As with bar codes, there is an international numbering organization that provides and registers the unique ID numbers of RFID chips.Footnote 3 RFID is ideally suited for the tracking and tracing of objects such as boxes, containers, and vehicles in logistic chains. RFID tags are now also being used to trace and track consumer products and everyday objects at the item level as a replacement for barcodes. Governments and the global business world are preparing for a large-scale implementation of RFID technology in the first decades of the 21st century for these purposes.
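The behavior of a passive tag described above can be sketched in a few lines of Python. This is a minimal illustrative model only; the class and field names are our own inventions, not an actual RFID air-interface or API, and we assume a 128-bit identifier as the tag's unique ID.

```python
import secrets

class RFIDTag:
    """Toy model of a passive RFID tag: a unique identifier of up to
    128 bits plus a small writable memory. A passive tag has no battery;
    it responds only when a reader's radio signal powers it."""

    def __init__(self):
        self.uid = secrets.randbits(128)  # globally unique 128-bit ID
        self.memory = {}                  # writable store (product data, price, ...)

class RFIDReader:
    """A reader interrogates every tag within radio range and receives
    each tag's unique ID together with its memory contents."""

    def scan(self, tags_in_range):
        return [(tag.uid, dict(tag.memory)) for tag in tags_in_range]

# Two item-level tagged objects: a shirt and a jar of coffee.
shirt, coffee = RFIDTag(), RFIDTag()
shirt.memory["product"] = "shirt"
coffee.memory["product"] = "coffee"

reader = RFIDReader()
readings = reader.scan([shirt, coffee])
# Each object answers with a distinct identifier, so the reader can
# tell apart two otherwise identical products.
```

Note that, unlike a bar code, each individual object carries a distinct identifier, which is what makes item-level tracking (and hence surveillance of the person carrying the items) possible.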

Apart from the race to make RFIDs ever smaller, one of the research challenges is to make the chips self-sufficient and energy-saving, or even energy-“scavenging,” in which case they get energy from their environment in the form of heat, light, or movement. The other challenge is to make them cheaper. One way to lower the unit cost of RFID chips is to find mass applications such as chipping bank notes, which the EU is considering.Footnote 4 With RFID each object has its own unique identifier, and individuals will — apart from being walking repositories of biometric data — become entangled in an “Internet of things.”Footnote 5 RFID foreshadows what nano-electronics has in store for our privacy: invisible surveillance.

When one does make the distinction between the continuity and the synoptic aspects of panopticism, however, one may argue that the sting of the nano-privacy debate may not be synopticism but continuity. The introduction of fabrics, films, and new materials and surfaces that assist in recording, storing, and giving off information to the environment at close range and at a distance, constantly and systematically, may serve centralized data collection and processing, but may also accommodate highly context-dependent and local information needs. The invisibility and ubiquity of these applications add to their stealth character.

A noteworthy aspect of this development may thus be the constant and systematic, local, and distributed informational relations of an individual with his or her environment. Carpets could log footsteps, and the soles of your shoes could produce a report at the end of the day of where you have been. Furthermore, and perhaps more importantly, the architecture of new nano-technology will obviously not be that of a dome-shaped prison, which is easy to recognize as such and which clearly signals and symbolizes centralized control and monitoring of those subject to its regime. It will involve many invisible, ubiquitous, and ambient applications, with properties new and unexpected to users, that can be used as instruments of surveillance and surreptitious tracking and tracing. This surveillance will be continuous but need not be synoptic, setting people free from the centralized dome, but not placing them outside the reach of decentralized control.

IX. DATA PROTECTION

The second way in which the privacy debates associated with nano-technology may be different from our present ones is that they shift our focus away from information and data. Privacy debates have quite naturally focused on information and on constraining its use and dissemination. In the last two decades it was emphasized that the information society was not about atoms, the old economy, and traditional factors of production; it was about bits and bytes, about the knowledge economy, and about information. Existing legal and privacy regimes apply almost exclusively to data and personal information. Data protection is about “data,” particularly personal data. No data, no need for data protection; no personal information, no need for informational privacy.Footnote 6

It is difficult to accommodate new and ever-changing hardware that will be used to collect information on individuals, and hence to evaluate legal evidence consisting of personal data collected by means of such new technology. An early awareness of this problem surfaces in Kyllo v. United States, 533 US 27 (2001), where the focus was primarily on the technology itself and not so much on the information it was used to generate: it was argued that the defendant could not have known about the new remote-sensing techniques that were used to detect cannabis growing in his home.

Nano-technology will provide us with many more examples of new sensor devices and thus will take privacy discussions to the level of the design of new artifacts, materials, surfaces, properties of artifacts, and fabrics. This will require a broadening of our thinking about privacy: it should also cover the design of the artifacts that help to generate information. The privacy debate has moved from atoms (doors, brick walls, and curtains) to bits and bytes (information technology), only to veer back again to the level of atoms (nano-technology). Avoine and Oechslin (2004) argued that it is not sufficient to discuss data protection at the level of the application or at the level of communication; the physical level needs to be considered as well.

X. AN EXPLORATIVE DIGRESSION

Imagine a shopping center, a mall, or a big department store. The visitors can be divided into the usual categories. There are customers, some with a clear intention to buy something, others just looking around. Some customers are ready to spend considerable sums of money; others are just looking for bargains. And then, of course, there are those who intend to steal something or are about to cause trouble. Store staff are aware of these categories of customers and try to anticipate and deal with them adequately, which may involve showing the latest mink fur coats, engaging in polite conversation, or notifying security.

Let us now also imagine that the technology of labeling products were to change radically and all clothes were to be equipped with invisible, readable data carriers. Producers of those clothes write product information on them, and stores add the price and also security codes to set off the alarm when thieves try to leave the store. This would be a nice and flexible system that makes a whole series of actions superfluous. Customers need not remove sticky tags with codes from their freshly bought items, and stores can add, read, and remove all kinds of information with a simple scanning device. In the case of food, the whole logistic chain from production to consumption could be logged and retrieved. Some speculate that a capitalist wonderland could be envisaged where buying will be fully automated: consumers walk into shops, take what they need, and just walk out again. With such a system in place, it seems but a small step to let store staff become aware of the information customers carry around. A customer who has just bought an expensive piece of jewelry in a department store around Christmas time, and then moves to the toy section, can be expected to buy something expensive again. The staff at the toy department would just have to scan the bag of the customer discreetly to have interesting sales information.

The next step in this train of events is that the staff of the store may actively tag people that visit the store. After some years, most people may be wearing clothes that are equipped with those readable data carriers. Security may start tagging those who attempted to steal something; a measure to keep thugs out of the store is now easily carried out by making their trainers activate the store's alarm on reentering. Staff may continue tagging regular customers to let colleagues know how best to anticipate the behavior of the customers concerned. In this way people are transformed into carriers of their own profiles, with the possibility of being used as such. When scanned, it is revealed whether or not they bought expensive products, whether or not they easily decided on their purchases, and whether or not they are welcome in the store.
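The writable data carriers of this scenario can be sketched as follows. The sketch is purely illustrative and hypothetical: the class and the writers ("producer," "store," "security") are our own labels for the parties in the digression above, not any actual RFID interface.

```python
class DataCarrier:
    """Hypothetical writable data carrier woven into a garment.
    Anyone with a scanner can read it or append entries to it; the
    wearer carries the accumulated profile around, possibly without
    knowing what has been written."""

    def __init__(self):
        self.entries = []

    def write(self, author, note):
        # Appending is open to any party with a writer in range.
        self.entries.append((author, note))

    def read(self):
        # Reading likewise requires no consent from the wearer.
        return list(self.entries)

# The producer and the store write legitimate product data; later,
# security staff tag the wearer himself.
coat = DataCarrier()
coat.write("producer", "wool coat, size M")
coat.write("store", "price: 250; security code: armed")
coat.write("security", "attempted theft, Dec 2006")

# Any subsequent scan reveals the whole profile, product data and
# behavioral judgments alike: the wearer has become the carrier of
# his own dossier.
profile = coat.read()
```

The point of the sketch is that nothing in the artifact itself distinguishes innocuous product data from judgments about the person wearing it; the constraint, if any, has to be designed into the carrier.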

XI. PRIVACY AND DATA CARRIERS

The scenario sketched in the previous section will most probably be seen as a breach of the visitors' privacy. One may try to capture this breach in terms of panopticism and data protection, and argue that the tagging of agents in the sketched scenario introduces information about these agents that makes them susceptible to control in ways which go against their interests; that they may be harmed in several ways, e.g., price-targeted or statistically discriminated against. A first reaction could be that the visitors should be given the means to access that tagged information and to control its content. Moreover, one could require that store staff be constrained in tagging and collecting information. But this approach focuses primarily on the results of the tagging and retrieving, and may even shift responsibility to the agent whose privacy is breached; especially if the visitor is able to control the data tagged on him or her, the visitor becomes the one who determines which data are available. The scenario sketched here is not one of synopticism; the department store is not necessarily keeping a centralized database or accessing a database with information about visitors. This first reaction does not address the underlying fact that writable data carriers are inserted into the clothes of the agents.

When clothes are equipped with writable data carriers, no information is available yet. To deal with these problems adequately, it seems that we need to shift attention to the design and use of the artifacts involved, cast in terms of writable information carriers that are attached to agents. A privacy account that is adequate in the age of nano-surveillance should be able to accommodate the designing and equipping of artifacts with readable and writable tags, at the stage when there are no personal data yet.

Mobile phones and RFID tags already provide examples of how this could work today. Celebrities have been photographed in the company of their mistresses. Up-skirt photography is raising concerns in Japan, as embarrassing pictures of girls in high schools are placed on the Internet. In Japan, the cameras of mobile phones therefore need to be designed in such a way that bystanders are warned, by either a flash or a sound, that pictures are being taken, much like the beep on a truck that signals to bystanders that it is reversing.

These and other developments foreshadow what nano-technology has in store for us in the coming decades. Consumers have good moral reasons to want to keep control over the sensing properties of their environments, over how their identities are construed locally, and over how they are perceived in stores, in hospitals, and in the street, for the reasons outlined above. They may fear harm, unfair treatment, or discrimination, or may feel unfree to behave as they themselves think most appropriate. But they need to engage information technology upstream, when it is still merely technology and there is not yet information.

One could also argue that the first wrongdoing lies in the fact that tagging and sensing practices may fail to respect the autonomy of the subject and his or her right to self-determination, and may thus violate the requirement of informed consent. Or one could argue that economically valuable information is read off, preempting the choice of consumers to trade or sell that information themselves. Or one could argue that thieves could read off what valuables customers are carrying around and so target their victims.

Our privacy framework provides moral guidance for design: designers could, for instance, give users the means to detect tags or to “kill” the tags applied, to scramble the signal from non-authorized reading devices, or to limit the range within which their tags can be read. The focus of attention should not merely be on the information that is generated, stored, and reused by RFID tags, but also on the fact that with those tags, users may feel actively monitored from all sides, and may feel that their moral autonomy to project their moral identity is compromised. The focus should thus be widened to decentralized data flows and to users' control over the surveillance technology they carry.

XII. CONCLUSION

If nano-technology will, among other things, amount to inserting (writable) data carriers into clothes, utensils, and other artifacts, agents will have to become aware that in the vicinity of these items they can always be subject to surveillance. Cell phones make it easy to locate their owners; clothes may make them taggable. These intrusions into the privacy of agents call for rethinking the role of centralized information and panoptic surveillance as the central and organizing privacy concept. Having control over information and data may no longer be the sole and primary means for agents to hold on to their privacy; controlling the data gadgets embedded in products, and shaping their design, may become equally important.

Typically, privacy is about information; in a normative sense it refers to a non-absolute moral right of persons to have direct or indirect control over access to

  1. information about oneself, and

  2. situations where others could acquire information about oneself.

  3. We propose that privacy should also be taken to refer to the technology and/or artifacts that can be used to process personal data.

Not only database specialists, ICT professionals, security experts, and cryptographers should think about privacy in the future; nano-technologists, designers of materials and fabrics, supply chain managers, and retail professionals will also have to think in terms of privacy designs. They will have to worry about how existing nano-technology can be made visible, detected and neutralized, or read.

People will have to become aware of the fact that when they buy clothing they also buy a writable memory stick. This changes the conditions under which people consent and intend things. Actions like “putting on a coat,” “carrying a gift out of a shop,” or “driving from A to B” are no longer what they appear to be. What actually happens is that one carries a gift out of a shop and lets the store know which route one followed through it, and “driving from A to B” has become “driving from A to B and registering in the vehicle registration system.”

Much of the ingenuity of lawyers and all who are professionally concerned with privacy has gone into designing institutions, legal frameworks, and codes of conduct. But if we are correct in drawing attention to the material side of things here, then it is not the information and personal data alone that should concern us in thinking about the implications of nano-technology, but also the fact that certain nano-particles, films, and devices, as carriers of information with certain physical properties, will be widely used.

Notes

1. This is a so-called tort law or informational construal of privacy, which needs to be distinguished from constitutional privacy. EU data protection laws are primarily about informational privacy. See Sandel (1995) for a brief and very lucid account of the distinction between informational and constitutional (or decisional) privacy.

2. The conceptual confusion concerning privacy has often been observed. Many studies and articles on privacy start with the observation that privacy is extremely difficult to explicate and define. Judith Thomson (1975) stated that “perhaps the most striking thing about the right to privacy is that nobody seems to have any very clear idea what it is.” Posner (1978) observed that the concept is elusive and ill defined.

3. EPCglobal, URL: www.epcglobalinc.org, accessed May 17, 2007.

4. Yoshida, Euro bank notes to embed RFID Chips by 2005, EETimes, 19.12.2001, available at: http://www.eetimes.com/story/OEG20011219S0016, accessed May 17, 2007.

5. This is the Title of a study of the International Telecom Union; www.itu.int/internetofthings, accessed May 17, 2007.

6. We leave out of consideration here issues of so-called “constitutional or decisional privacy,” which comprises issues in US law that are concerned with the freedom of choice, e.g., regarding abortion and sexual behavior.

REFERENCES

Van den Hoven, M.J., & Cushman, R. (1996). Privacy, health care data and information technology. The Journal of Information, Law and Technology, 3. Available: http://elj.warwick.ac.uk/elj/jilt/confs/3privacy/. New citation as at 1/1/04: http://www2.warwick.ac.uk/fac/soc/law/elj/jilt/1996_3/hoven/, accessed May 17, 2007.
