
A perspective on city dashboards

Dashboards have long been instruments that display the operation of a system in real time. Originally mechanical in nature, they are increasingly based on digital monitors that sense how a machine system is performing. We are all familiar with dashboards in cars as instrument panels that were originally analogue in construction and are fast becoming digital, currently being a combination of both. As computers have become ever more pervasive, dashboards have emerged that do not necessarily relate to an underlying mechanical system but can record information in real time that ultimately might relate to virtual operations. Dashboards usually display information available in real time and are especially suited to routine, continuous measurements that pertain to the underlying system. With the advent of digital computers, information about physical operations could be converted into digital form, and the very devices that were used to monitor the operation of digital computers themselves were the first systems to be used for such measurements. These were often visual, being based on oscilloscopes that displayed the performance of the machine, and they were quickly adopted for rudimentary computer graphics.

The idea of monitoring human systems is intrinsic to modern medicine, but monitoring human organizations with respect to the behaviour of individuals has only become significant since computers became all-pervasive during the computer revolution, which began in the early 1980s. In fact, the first dashboards for managing organizations predated contemporary computers whose screens use part of their memory, and the systems that emerged in the late 1960s and early 1970s were essentially portals to hand-collected information – some of it displayed in near real time, a little after its collection, using teletype-like devices. The most iconic example was the control centre called Cybersyn, developed by the cyberneticist Stafford Beer for the left-wing Allende government which came to power in Chile in 1970 (Morozov, 2014). Beer essentially argued that a dashboard-like control room for monitoring the Chilean economy should be set up, and plans were made to make this operational using a system of data collection that involved manually harvesting data about the performance of various industries using telex machines. These formed a simple analogue communications network, not a digitally networked set of sources – for in 1970 the notion that computers and communications would converge was in its infancy. The system thus required a fair bit of manual intervention, with workers actually monitoring the data daily and physically ensuring they were transmitted by telex. Needless to say, like many of the Soviet attempts at managing such a command economy, the data were always out of sync, notwithstanding the fact that the theories behind the way the economy worked were primitive and often just plainly wrong. Eden Medina's (2011) wonderful book on the Chilean project shows just how prescient, how misguided and how impossible it was to fashion a dashboard with its mirror on human organization prior to the age when computers became synonymous with communications.

Kitchin, Lauriault, and McArdle's (2014) article brings all this up to date with respect to control centres that might be used to manage the city and the panoply of indicators that can be constructed from different data sources. Their thesis is that attempts to see management in this way are flawed because those in control rarely recognize that the data they collect are socially constructed and massaged in countless ways before they ever get to the point where they are potentially useful. They warn that the present obsession with dashboards and control centres often forgets the role of socially constructed data, and their argument is one of restraint in employing such portals in the rush to take on board ideas about the smart city.

To an extent, the most rudimentary of dashboards applicable to displaying the routine operation of the city do collect data in real time that are comparatively neutral in their factual complexion. That is, such dashboards are powered by real-time data feeds that exist in many domains of the city, predominantly from the operation of various infrastructures and the monitoring of natural events, but increasingly supplemented by hand-held devices measuring human reactions. Our own example, built by Oliver O'Brien in CASA at UCL, essentially collects what can be easily retrieved from the application programming interfaces (APIs) that make such data available in real time, and arranges these data in useful ways. Viewers of the dashboard can thus visually compare and combine aspects of the city, from the weather to the operation of transit systems to 'what is trending on Twitter', so that they can assemble some synthesis of the city's performance. This assumes, of course, that the data being fed into these dashboards in real time are comparable in some way and are factually correct. How the data are displayed and then used is, of course, a social activity, and there are no guarantees that such a view of the city is of any use: interesting perhaps, but how useful for specific tasks is debatable. In fact, our own dashboard was not constructed primarily as a tool for letting city dwellers learn about a city's performance (although this is a side product) but as a proof of concept: showing what might be possible as a demonstrator of how one might proceed.
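To make the pattern concrete, the sketch below shows the kind of polling loop that sits behind a dashboard of this sort: several feeds are fetched over HTTP, each is reduced to a short display string, and the board refreshes on a fixed cycle. The endpoint URLs, JSON fields and refresh interval are hypothetical placeholders for illustration, not the actual APIs the CASA dashboard consumes.

```python
# Minimal sketch of a real-time dashboard's polling loop. The endpoints
# and payload shapes are invented for illustration.

import json
import time
import urllib.request

# Hypothetical feed registry: name -> (URL, parser for that feed's JSON).
FEEDS = {
    "weather": ("https://api.example.org/london/weather",
                lambda d: f"{d['temp_c']} C, {d['summary']}"),
    "tube":    ("https://api.example.org/london/tube-status",
                lambda d: ", ".join(f"{l['line']}: {l['status']}"
                                    for l in d["lines"])),
}

def fetch(url):
    """Retrieve one feed and decode its JSON payload."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def poll_once():
    """Build one dashboard 'frame': a dict of panel name -> display text."""
    frame = {}
    for name, (url, parse) in FEEDS.items():
        try:
            frame[name] = parse(fetch(url))
        except Exception as exc:  # one dead feed should not take down the board
            frame[name] = f"feed unavailable ({exc})"
    return frame

if __name__ == "__main__":
    while True:
        for panel, text in poll_once().items():
            print(f"[{panel:8}] {text}")
        time.sleep(60)  # refresh each minute, as many real-time panels do
```

Note that the loop does no analysis at all: it simply relays whatever each API reports, which is precisely the proof-of-concept character described above.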

More considered and scaled-up dashboards akin to Beer's Cybersyn are appearing in large cities, with the one in Rio de Janeiro, Brazil, built by IBM and pictured in Kitchin et al.'s (2014) figure 1, one of the best known. This is geared to emergency management, and one assumes that the data being fed in real time do provide a basis for command and control of the emergency services. Even this can be problematic and, of course, such forums are not new. For many years, particularly in the United States, police departments have used such real-time information in their daily briefings, and now, with the proliferation of hand-held devices, this kind of briefing function is being decentralized into the field. In fact, traffic control centres in big cities use a variety of manually collected data, online feeds from cameras and loop counters, as well as visual inspection to ensure the smooth operation of routine traffic, with manual override still being a major feature of such interfaces in the management of the city's routine operations.

The closer such interfaces are to data produced in real time from fixed or mobile sensors (which include hand-held devices), the less structure the data have, but there are still social consequences in deciding what data to collect, and this conditions applications. In general, the more data are abstracted over space and time – which means introducing structure into the initial data with respect to the way they are aggregated or classified – the more useful they become, in that their organization usually reflects a purpose for which the data are to be used. Clearly, as Kitchin et al. (2014) imply, if data are being used to measure the performance of a city, then this is strongly conditioned by purpose, and any dashboard or control centre that involves displaying data processed for some purpose is subject to all the biases and preconditions that users bring to bear on this process. In fact, the nearer the data are to their raw source, as in the dashboard we in CASA have developed – shown in Kitchin et al. (2014), figure 2 (see http://www.citydashboard.org/) – the less useful they are for measuring city performance.
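The point about abstraction can be made with a toy example. The sketch below takes hypothetical raw sensor readings and aggregates them over space (district) and time (hour); the structure imposed by that classification is exactly what makes the output interpretable for a purpose. All readings and place names are invented.

```python
# Toy illustration of abstracting raw feeds over space and time:
# classify each reading into an (hour, district) cell and summarize.

from collections import defaultdict
from statistics import mean

# (hour of day, district, NO2 reading) - hypothetical raw feed records
readings = [
    (8, "Camden", 41.0), (8, "Camden", 44.5), (8, "Hackney", 38.2),
    (9, "Camden", 47.1), (9, "Hackney", 35.9), (9, "Hackney", 37.0),
]

# Aggregate: mean reading per (hour, district) cell.
cells = defaultdict(list)
for hour, district, value in readings:
    cells[(hour, district)].append(value)

for (hour, district), values in sorted(cells.items()):
    print(f"{hour:02d}:00  {district:8}  mean NO2 = {mean(values):.1f}")
```

The choice of cell – hourly means by district rather than, say, daily maxima by street – is itself the purposive, socially loaded decision the argument turns on.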

In our dashboard, each individual source has its own interpretation by any user examining how the data change over time: the weather, disruption on the underground (tube), road congestion from street cameras, the state of the FTSE 100, and so on, are all data that an individual might respond to intuitively without employing any powerful analytics to interpret them. Overall, it might be possible to get some sense of what is happening in a city, and how desirable or otherwise this is, from a general interpretation of data feeds from the dashboard. But these only really come into their own once analysis of the data begins, and this invariably involves some aggregation of the data to some purpose. For example, the Amsterdam city dashboard includes a variety of data on population, social class and density, as well as pollution, Twitter feeds and so on, which mixes data over longer periods than the sort of immediacy that we have in the CASA dashboard (see http://citydashboard.waag.org/). In fact, the analytics associated with this dashboard are quite impressive in a visual sense, and there is also a map capability that allows users to plot data (from census sources, for example), thus learning about the wider context of the city in terms of its districts and neighbourhoods.

In short, measuring and interpreting city performance in any individual or synthetic way from these kinds of portal requires aggregation to some purpose: a move away from real-time monitoring of performance to some more abstracted interpretation of how the state of a city is changing over the longer term. More powerful analytics with respect to how the user can interrogate and interpret the data are clearly required, and, as Kitchin et al. (2014) imply, progress in this domain has so far been limited. In fact, because there appears to be little structure in how dashboards and control centres are being constructed with respect to the different windows they have on a world in real time, or some aggregation thereof, one could be forgiven for drawing the conclusion that it is largely hype and spin that dominates their use so far. Of course, it is important to react to emergencies and to assemble data rapidly on what is happening, but to place these data alongside everything else in a dashboard or control centre begs the question of why. It is well known that, despite the missives from large computer companies such as IBM and Cisco, integrating diverse data is extremely difficult, often impossible, because there are no common keys, because the data are in inconsistent formats, and because of noise, missing data and so on (Batty et al., 2012). What is needed urgently is some considered evaluation of what is happening in this domain, an ongoing evaluation as soon as such instruments are introduced. Use and relevance need to be established, for only through this will such portals be improved and made useful in the routine management of cities and the assessment of their performance. The need for such assessments is not in doubt. There is, however, a long way to go, as Kitchin et al. (2014) imply, when it comes to developing the new technologies for how we might do this.
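The integration problem is easy to demonstrate. In the sketch below, two invented feeds describe the same places but share no reliable key: one is indexed by free-text names in inconsistent formats, the other by administrative codes with an incomplete name lookup, so a record is silently lost in the join. None of this reflects any real dataset.

```python
# Toy demonstration of why diverse urban data resist integration:
# no common key, inconsistent formats, and missing entries.

# Feed A: air quality keyed by free-text area name (messy formats).
air_quality = {"Tower Hamlets": 42.1, "Hackney ": 38.7, "CAMDEN": 45.3}

# Feed B: population keyed by an administrative code, with a name
# lookup that does not cover every record (a missing-key problem).
population = {"E09000030": 310000, "E09000012": 280000, "E09000007": 262000}
code_to_name = {"E09000030": "Tower Hamlets", "E09000012": "Hackney"}
# note: no name recorded for the third code, so that join will fail

def normalize(name):
    """Reduce trivial format differences (case, stray whitespace)."""
    return name.strip().title()

# Attempt the join on the only shared attribute: the normalized name.
aq_clean = {normalize(k): v for k, v in air_quality.items()}
merged = {}
for code, pop in population.items():
    name = code_to_name.get(code)
    if name is None:
        continue  # no common key at all - this record cannot be matched
    key = normalize(name)
    if key in aq_clean:
        merged[key] = {"population": pop, "no2": aq_clean[key]}

print(merged)  # one area is lost despite being present in both feeds
```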

References

  • Batty, M., Axhausen, K., Giannotti, F., Pozdnoukhov, A., Bazzani, A., Wachowicz, M., Ouzounis, G., & Portugali, Y. (2012). Smart cities of the future. European Physical Journal Special Topics, 214, 481–518. doi:10.1140/epjst/e2012-01703-3
  • Kitchin, R., Lauriault, T., & McArdle, G. (2014). Knowing and governing cities through urban indicators, city benchmarking, and real-time dashboards. Regional Studies, Regional Science. doi:10.1080/21681376.2014.983149
  • Medina, E. (2011). Cybernetic revolutionaries: Technology and politics in Allende's Chile. Cambridge, MA: The MIT Press.
  • Morozov, E. (2014, October 13). The planning machine: Project Cybersyn and the origins of the big data nation. The New Yorker. Retrieved from http://www.newyorker.com/magazine/2014/10/13/planning-machine