Editorial

Computing in this world

This thematic issue of Interdisciplinary Science Reviews, titled ‘Computing in the World’, stems from the Fifth International Conference on the History and Philosophy of Computing (HaPoC), which took place at the end of October 2019 in Bergamo, Italy, a few months before that city and the surrounding province became one of the areas hit hardest in the world by the first wave of the COVID-19 pandemic.

The main idea behind this issue, conceived before the coronavirus changed so many aspects of our lives, was that computing has become ubiquitous in many parts of the world and in most, if not all, fields of endeavour. In the tradition of this journal, the editors were looking for articles in the physical, social and human sciences that would show genuinely meaningful interdisciplinarity, which, in the context of the HaPoC conference, was expected to emerge either from the use of computing machines in historical and philosophical research or from historical and philosophical research about computing machines.

The selection process, after the call for papers closed, took place while the pandemic was gradually putting most of the world in lockdown. A fear arose that, in such exceptional circumstances, the end of which was nowhere in sight, the main theme of the issue would no longer be so interesting, dwarfed by all the consequences of an unprecedented planetary disaster. It turned out that such a fear could not have been further from the truth.

Computing and the relevant machinery have taken a prominent role in supporting many of the activities that have been disrupted by the pandemic. We have been heavily relying on computers, smartphones and telecommunication infrastructures to keep work and sociality going in ways that were compatible with the constraints imposed by the circumstances. It has also been recognized that the record-breaking speed at which vaccines for the coronavirus were developed was aided by the ease with which scientists all over the world could exchange data on the virus’s genome and experimental results over digital networks.

The benefits of these technological enhancements are undeniable. However, even a simple event like a video-chat can provide a chance for reflection on the intrinsic limitations of a computational approach and its non-computational ramifications.

The success of a computational solution to a real-world problem depends on whether such a solution, which is essentially a collection of digits, can be transformed into a real-world phenomenon that solves the original problem. In the case of a video-chat, the computational solution is good but not perfect, because of the constraints imposed by the limits of the technology: there is no computational description of tactile sensations, and thus, while in quarantine, we can see and hear our loved ones over the Internet but not touch them. In an era dominated by a contagious disease, when touch is considered risky behaviour, computation is not able to help us overcome our touch-hunger.

Another, less immediately perceivable but no less significant, class of constraints is at work when we use computation in our lives. These are all the non-computational factors which ensure that computational devices function properly. We can roughly divide these factors into two categories that parallel the traditional distinction in computer science between software and hardware: on the one hand, there is the need for common rules for the creation and transformation of digital data, so that interoperability between different computing machines is ensured; on the other hand, electrical power, cables, antennas, transmitters and receivers need to be installed and operated for those data to be exchanged. Although they support computation, none of these factors is computational: some are physical and others are political.

The material machinery that constitutes the hardware for computation is constrained by the laws of physics. For instance, the design and construction of telecommunication networks are influenced, among other things, by the attenuation of signals over long cable runs. This is a universal issue: anything that exists in physical reality is subject to the laws of physics, so this constraint does not characterize computational devices in any particular way.

Software is a more abstract entity, constituted by the description of the configurations that the hardware must assume to perform a computation. The physical constraints that guide the design of hardware indirectly affect software as well, because software cannot include descriptions of configurations from which the hardware is precluded. However, much freedom is granted to software designers, since there is no physical law constraining the language in which they choose to write those descriptions. For instance, in HTML, the language used to instruct a browser on what a webpage is supposed to look like, angle brackets are used to create tags (e.g. <TITLE>) that categorize different parts of the content of a page. The choice of angle brackets over, for instance, square brackets was a decision freely taken by members of a working group.
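As a minimal illustration (the title and text below are invented for the example), this is how such tags mark up a tiny webpage:

  <html>
    <head>
      <title>Computing in the World</title>
    </head>
    <body>
      <p>A paragraph of content, delimited by an opening and a closing tag.</p>
    </body>
  </html>

Had the working group decided otherwise, the same structure could just as well have been written with square brackets; no law of physics forbids it.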

Arguing about which brackets to use in a language may seem pedestrian, but it sheds light on the nature of the political constraints that characterize computing in the world: computing machines are able to work, and to work with each other, because decisions were taken by a number of people, and the rest of the world has complied with what those decisions prescribed.

If we enlarge the scope of computational tools to human endeavours like geography, for example, critical issues immediately emerge. A Geographic Information System (GIS) is a type of software for gathering, managing and analysing geographic data in the form of layers visualized on top of maps or inside three-dimensional scenes. A GIS organizes the data to be visualized in a way analogous to what a browser does with webpages; the relevant language, used to express geographical features, is called the Geography Markup Language (GML), and it was defined and is maintained by the Open Geospatial Consortium (OGC). The widespread adoption of GIS technology around the world, and the fact that GIS software is based on GML, mean that the OGC has substantial control over how geographical data are expressed and treated in most computing machines.
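As a rough sketch (the coordinates are purely illustrative), a single location might be encoded in GML along these lines:

  <gml:Point srsName="urn:ogc:def:crs:EPSG::4326">
    <gml:pos>45.698 9.677</gml:pos>
  </gml:Point>

Even in this tiny fragment, the choice of elements, the coordinate reference system named by srsName and the order of the coordinates all follow conventions decided by the consortium.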

This is not an issue of brackets anymore, but of conceptual frameworks through which data collected in the real world and about the real world get transformed into digits, stored, elaborated, transmitted, shown on screens and then interpreted by geographers. When data are visualized on a computer, they are not direct observations filtered just through a screen, but the result of several processes of digitization, encoding, tagging, classification and visualization, all governed by a framework established by a consortium whose eight strategic partners are either North American or European.[1]

Whether the digitization of research in geography by means of GIS tools can reflect the multicultural diversity that characterizes the various regions geographers might intend to investigate remains to be seen. What is certain is that the ultimate decisions on how to conceptualize computational geographic data are in the hands of a very small number of countries, and any attempt at creating an alternative approach will face the daunting task of designing new computational tools and negotiating their adoption with a critical mass that has already settled on a widespread solution.

There are no technological constraints here: it is a matter of who holds the power to determine the success of some computational models over others in tackling some aspects of the world.

The COVID-19 pandemic brought all these issues to centre stage: not all aspects of reality can be treated by means of computational tools; for those aspects for which a computational treatment is possible, digitization requires a number of arbitrary decisions by the people in charge of the process; often, such decisions are influenced by power relations and end up maintaining those relations.

In circumstances where the chasm between the messy, concrete reality of health and economic crises and the neat, computational environment of digital models and data is more visible than ever, we are called to study the role of computation in the world from at least two different perspectives: what computation can (and cannot) do inside a discipline, in terms of the modelling of phenomena, the creation of digital data, the negotiation of the meaning of such data and the relation between new computational processes and traditional methods; and what computation can (and cannot) do outside a discipline, in terms of the dissemination and discussion of research results among larger communities and policy makers, and of knowledge exchange with other disciplines, possibly on the basis of common computational models or by building computation-based conceptual bridges, to name a few possible endeavours.

This is a very ambitious and vast programme, and this is where the historical and philosophical approaches dear to the HaPoC community come to the rescue by providing two useful frames of reference. I am neither a historian nor a philosopher but a computer scientist, so I can merely point out the shortcomings of my discipline and indicate what kind of help I have come to expect from historians and philosophers when computation fails. Computation, for instance, can never provide the context and the circumstances in which some data were collected. We can analyse the data and find all sorts of interesting and unexpected correlations among them, but data do not say much about themselves, and we need the support of a historical investigation to complete the picture. Moreover, computation can help us perfect the way we reason by formalizing every passage in our discourse in terms of inference rules, but it is completely agnostic about the premises from which our discourses start. Understanding which premises are the correct ones, or whether correct premises exist at all, is a very stimulating philosophical endeavour.

In today’s world, where everything, from science to politics, from the most complex medical hypotheses to even the most blatant facts, is questioned, we feel the urge not only to understand things but also to make them better. The goal of this thematic issue of Interdisciplinary Science Reviews is to initiate a discourse on the role, in terms both of enhancements and of limitations, that computation has had and will have in such an effort across a variety of disciplines. My gratitude goes to all the authors for sharing their work in this direction.

Notes
