
Pedagogic encounters with algorithmic system controversies: a toolkit for democratising technology

Pages 226-239 | Received 28 Jun 2022, Accepted 16 Feb 2023, Published online: 16 Mar 2023

ABSTRACT

There is a broad impetus across policy and institutional domains to expand public engagement and involvement with emerging technology research and innovation. Yet innovative theory, methods, and practices to critically explore algorithmic system controversies and democratic possibilities are still in nascent form. In this paper, we bring together thinking about public pedagogy, infrastructuring, and democratic design to introduce a novel pedagogic encounters toolkit, where pedagogic encounters are defined as: opportunities for learning together about the emergence, relationality, and uncertainty of algorithmic system controversies. A framework with accompanying practices and examples is outlined for researchers, policymakers, and practitioners to creatively and critically explore these pedagogic encounters within specific contexts. We conclude with implications for future investigations attentive to making democratic designs in a datafied society. In doing so, this paper informs theoretical and methodological innovation in the study of algorithmic system controversies and democratising technology.

Introduction

Institutions around the world are striving to develop emerging technologies that benefit constituents while mitigating potential harms and controversies. Algorithms, data, and artificial intelligence-based technologies enable increasingly pervasive forms of information processing, knowledge acquisition and decision-making across society, which surface differential risks and benefits according to particular contexts, publics and values (Whittlestone et al. 2019, 8). Amid increasingly complex ways to participate, connect, and share across society, ‘being digital citizens’ is a continual struggle between attempts to control and manage citizens and openings for new ways of being (Isin and Ruppert 2015, 97–98). The reconfiguring of relations between citizens, technology and the state unsettles traditional forms of participation, governance, and learning. In particular, datafication is defined as a new form of governance reliant upon ‘data-driven practices of categorization, classification, segmentation, selection and scoring’ (Hintz, Dencik, and Wahl-Jorgensen 2019, 146). This paper focuses on understanding how algorithmic systems permeate social fields and multiple policy and institutional domains, the attendant controversies emerging from their use, plus the potential for new forms of public pedagogy.

Algorithmic systems are characterised by a decision-making process in which algorithms collect and analyse data, derive information, apply this information, and recommend an action. Some systems are more automated than others, including automated decision-making ‘procedures in which decisions are initially – partially or completely – delegated to another person or corporate entity, who then in turn use automatically executed decision-making models to perform an action’ (Algorithm Watch 2019, 9). Algorithmic systems are fast becoming a primary mechanism for the administration of policing, welfare, urban planning, education and a wide range of other domains. As this happens, there are significant implications for groups that already experience asymmetries in the exercise of administrative power (Dencik and Sanchez-Monedero 2022). In the context of policing, predictive algorithms have been criticised for reinforcing racially based decisions, leading to discrimination against marginalised and low-income populations (Browne 2015; Browning and Arrigo 2021; O’Donnell 2019). In the welfare domain, AI-driven algorithmic systems have been legally challenged for extending the use of flawed and biased risk assessment models to introduce disciplinary actions against vulnerable social groups (Park and Humphry 2019; Whiteford 2021). In education, the use of online proctoring systems has raised concerns about privacy and discrimination in the use of facial recognition technologies (Selwyn et al. 2023). ‘Smart cities’ have been around for decades and have been the subject of significant scholarly critique and public controversy; this work identifies how algorithms are used to intensify an already extensive apparatus of policing, profiling and surveillance technologies in urban environments (Browne 2015; Shapiro 2020; Humphry et al. 2021; Humphry et al. 2022).
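
To make this definition concrete, the minimal sketch below strings the stages together as a toy pipeline (collect data, analyse it, derive information, apply it, recommend an action). The features, risk score and threshold are invented for illustration and do not correspond to any system discussed in this paper.

```python
# Toy sketch of the decision-making process described above. The risk-score
# formula and threshold are invented for illustration only.
def collect(applicant: dict) -> dict:
    # collect data relevant to the decision
    return {"missed_reports": applicant["missed_reports"],
            "income_variability": applicant["income_variability"]}

def analyse(features: dict) -> float:
    # derive a single piece of information: a hypothetical 'risk' score
    return 0.6 * features["missed_reports"] + 0.4 * features["income_variability"]

def recommend(risk: float, threshold: float = 0.7) -> str:
    # apply the derived information and recommend an action
    return "flag for manual review" if risk >= threshold else "process automatically"

applicant = {"missed_reports": 1, "income_variability": 0.2}
print(recommend(analyse(collect(applicant))))
```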

There are growing concerns about whether and how algorithmic systems should be applied in different aspects of society, concerns that we frame as algorithmic system controversies. We are primarily interested in how algorithmic systems mediate the everyday practices of citizens and in this process outsource, or supersede, organisational and professional discretion (Zouridis, Bovens, and Van Eck 2019). These systems have been characterised as ‘street-level algorithms’ (Alkhatib and Bernstein 2019): they interact constantly with citizens, work within a bureaucratic structure with a level of independence, or discretion (influenced by attitude and approach), and their potential impact on citizens is fairly extensive (Zouridis, Bovens, and Van Eck 2019). Algorithms have been the focus of numerous media and public controversies, which can also be understood in terms of an ‘interactive controversy’ that draws attention to the ‘external relations’ between a technology and society (Jasanoff 2017). This external aspect can be linked to ‘epistemic controversies,’ both within and outside scientific fields, where ‘[v]alues and judgements … affect decisions to adopt particular technological systems or trajectories’ (Jasanoff 2017, 270). Our key point is that controversies are not simply generated by the media and members of the public after an event, disaster or problem but happen within expert scientific and technological discourse as a means to generate new knowledge and make decisions.

In explaining their model of ‘technical democracy’, Callon, Lascoumes, and Barthe (2009) refer to the uncertain boundaries between the social and the technical and argue that controversies offer ‘powerful apparatuses for exploring and learning about possible worlds’ (28). Such uncertainties have democratic possibilities, not because they automatically offer a deliberative process, but because they expose otherwise opaque networks and relations and can institute collective learning and transformation of otherwise hidden connections between people with diverse expertise. Relatedly, there is widespread recognition of the need to assemble toolboxes of methods and theories that address the dynamism, complexity, and relationality of technology across education and society (Castañeda and Williamson 2021). Various strategies and methods for advancing goals of justice and enabling civic intervention focus upon the potential to make technologies more accessible and inclusive, more transparent and open, or more ethical and socially just (James et al. forthcoming). An increasing number of models, examples, and resources seek to support civic participation and public sector bodies in algorithmic decision-making and predictive analytics (Data Justice Lab 2021; Snow 2019). Additionally, there is growing interest in the potential of algorithmic impact assessments to identify and address harms with specific communities (Moss et al. 2021); however, these assessments focus upon accountability through collaborative governance rather than democratisation. Reconfiguring the interrelationship between technology, society, and democracy, therefore, requires new forms of ‘remaking participation’ and ‘reflexive learning’ infused by critical, open-ended practices (Chilvers and Kearnes 2020).

Herein lies the specific gap we seek to address: the potential of an integrative theoretical and methodological approach to explore algorithmic system controversies in dynamic, distributed, and democratic ways. This gap exists because there is currently no integrated framework and process using pedagogical approaches to specifically explore algorithmic system controversies and democratic possibilities. We therefore propose a novel framework for researchers, policymakers, and practitioners that builds upon a ‘technical democracy’ approach (Callon, Lascoumes, and Barthe 2009), providing a toolkit with accompanying controversies, methods and designs. Having introduced and applied the framework to some examples of algorithmic controversies in different domains, we conclude with implications for future research, policy, and practice attentive to opening up the tensions and possibilities of this approach for technology, society, and democracy.

Pedagogic encounters toolkit

This section introduces and unpacks the Pedagogic Encounters Toolkit (Table 1). This toolkit could be utilised across research, policy and practice contexts so as to enhance collective learning between people with diverse expertise about algorithmic system controversies. The toolkit’s framework, and accompanying practices and examples, are outlined next to signify the potential of pedagogic encounters with algorithmic controversies.

Table 1. Pedagogic encounters toolkit.

Theoretical framework

This section introduces and explores the notion of pedagogic encounters, defined as: opportunities for learning together about the emergence, relationality, and uncertainty of algorithmic systems. To do so, we draw upon interdisciplinary literature to propose the particular pedagogical, infrastructural, and design aspects which imbue these encounters.

Our first aspect – public pedagogy emergence – highlights pedagogical dynamics as evolving from particular contexts (such as learning with controversies). Public pedagogy is a complex and wide-ranging field of inquiry which frames ‘informal and everyday spaces and discourses themselves as innately and pervasively pedagogical’ (Sandlin, O’Malley, and Burdick 2011; Burdick and Sandlin 2010, 350). Our use of pedagogy locates education and politics beyond formal or institutionalised sites of learning towards education through democracy: an ongoing struggle ‘not to fix the meaning attributed to democratic education but rather to open the possibilities for new meanings’ (Sant 2019, 685). This educational aspect can potentially spark the emergence of pedagogic encounters with algorithmic system controversies.

Our second aspect – infrastructure relationality – foregrounds the distributed datafication of civic, organisational, and social life which underpins algorithmic systems (and the value of testing diverse methods). Educational approaches to the public implications of algorithms include digital literacy approaches, a subset of which focus on algorithmic literacies (see, for example, Rainie and Anderson 2017; Reisdorf and Blank 2021). To date, calls to enhance algorithmic literacy and include it in a broader definition of digital literacy have applied a narrow understanding of education, centred on the knowledge that individual internet users hold and how better understanding can improve digital and social inclusion outcomes. These perspectives, while acknowledging the opaque, or ‘black boxed’, nature of algorithms, provide few avenues for collectively challenging or shaping these as socio-technical assemblages or infrastructures. Conversely, evolving socio-technical conditions require new ways of mobilising diverse publics in understanding, and ‘acting on’, data infrastructures and practices (Milan 2019). A focus on infrastructures provides a way of highlighting how algorithmic systems are reliant upon the daily interactions, standards, and classifications of information systems, which mostly become visible when they break down or become ‘objects of contention’ (Bowker and Star 1999, 3). In addition, ‘infrastructure literacy’ focuses upon ‘how data infrastructures materially organise and instantiate relations between people, things, perspectives and technologies’ (Gray, Gerlitz, and Bounegru 2018, 1). This relationality also underpins the potential for users to change technologies-in-practice through creative appropriations, also referred to as ‘artful infrastructuring’ (Karasti and Syrjänen 2004). Likewise, an infrastructuring focus can expose the relationality of pedagogic encounters with algorithmic system controversies and help find ways for active engagement by multiple actors in the context of the rapidly changing interrelationship between humans and nonhumans and differential access to resources and skills (Humphry 2021).

The third aspect – democratic design uncertainty – aims to unsettle the scope of what is traditionally understood as design (so as to make democratic designs). For instance, participatory design generally focuses upon ‘design-in-use’ that occurs within a project timeframe with identifiable users; in contrast, meta-design focuses upon ‘design-for-design’ about ‘public controversial things’ beyond project boundaries (Ehn 2008). In doing so, design seeks to generate ‘agonistic public spaces’: ongoing encounters that connect heterogeneous participants, expose diverse perspectives, support existing and new networks, plus generate shared repertoires and strategies for change (Björgvinson et al. 2012). Such democratic designs (or ‘design for democracy’) can surface hegemony, confront power relations, and identify themes for contestation, plus new trajectories of collective action (DiSalvo 2010; 2012). Recognising democracy as an ongoing condition of contestation and dissensus, this approach seeks to mobilise the passions of citizens toward ‘democratic designs’ (Mouffe 2000, 103). Such a politically infused, or ‘medium design’, approach is responsive to indeterminate ‘protocols of interplay’, such as the ‘parameters for how things interact with each other’ (Easterling 2021, 37). This political design aspect can potentially surface the uncertainty of pedagogic encounters with algorithmic system controversies.

Toolkit practices and examples

In this section, we explore how pedagogic encounters can be enacted in practice through three interrelated practices (with supporting examples).

(i) Learning with controversies

Algorithmic system controversies span many areas of public administration and governance: policing, education and urban planning, across many cultural and national contexts (Browne 2015; Selwyn et al. 2023; Park and Humphry 2019; Mann et al. 2020). Next, we present two cases that focus on particular algorithmic system controversies, with attention to key features, stakeholders, and developments. Sharing such cases can help build pedagogic encounters about how algorithmic system controversies emerge, focusing on where, how, with whom, and what.

Swedish facial recognition technology (FRT) case

The use of facial recognition technology in education is steadily increasing. This includes surveillance and supplementary school safety measures in the United States, and authentication for examinations and online learning (Selwyn et al. 2023). For the purposes of this paper, we focus on the use of facial recognition in Sweden, where a school district trialled the technology in 2019 in partnership with Tieto, a Swedish technology company, using its in-house technology, under a programme called ‘Future Classroom’. A class of 22 students would use facial recognition for roll call – a task that could take 10 minutes for a human teacher. The trial reflects how ‘efficiency’ continues to be one of the most influential values in discussions of school governance. The school aimed to use this technology in all classes, and the school board claimed that 17,280 hours would be saved per year (Skellefteå kommun and Tieto 2018).

The process for using the system was that students or parents provided consent for the school to use it. Once consent was given, photos of the students were taken and stored in a database. The data consisted only of photographs and full names, stored on a local area network. When students entered a classroom, their photograph was taken again and the facial recognition algorithm matched it against the existing database. No outside companies or organisations could access the database, and students not involved in the pilot were not included.
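
The matching step at the centre of this process can be illustrated with a minimal sketch of embedding-based face matching. The embedding model, similarity threshold and data layout here are hypothetical assumptions for illustration, not details of the system trialled in Sweden.

```python
# Illustrative sketch only: a minimal enrol-and-match loop of the kind described
# above. The embedding model, threshold and data layout are hypothetical
# assumptions, not details of the Tieto system.
from dataclasses import dataclass

import numpy as np


@dataclass
class EnrolledStudent:
    full_name: str
    embedding: np.ndarray  # face embedding derived from the consented enrolment photo


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def mark_attendance(classroom_embedding: np.ndarray,
                    roster: list[EnrolledStudent],
                    threshold: float = 0.8) -> str | None:
    """Match a photo taken in the classroom against the local enrolment database.

    Returns the best-matching student's name, or None if no match clears the
    threshold (the source of the misrecognition discussed later in the paper).
    """
    best_name, best_score = None, threshold
    for student in roster:
        score = cosine_similarity(classroom_embedding, student.embedding)
        if score > best_score:
            best_name, best_score = student.full_name, score
    return best_name
```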

The Swedish data protection authority, under the European Union’s General Data Protection Regulation, fined the school board even though parental consent was obtained and opt-out was possible (Edvarson 2019). Overall, ‘the Swedish regulator determined that the use was inappropriate because attendance could have been monitored in a less intrusive manner, students had a certain degree of privacy expectations when in the classroom, sensitive personal information was being processed, and the school did not perform an impact assessment – which includes communication with the regulator’ (Luo and Guo 2021, 169). A key aspect of the regulator’s finding was that there were significant power imbalances between students and the school. These would be intensified by the use of facial recognition, even if consent was given to use the system.

InLinkUK smart kiosk case

Smart’ street furniture adds Internet-of-Things technologies, connectivity and data-driven advertising to traditional urban forms such as benches, bus stops and payphones. These urban hybrids, sometimes associated with ‘smart city’ visions and strategies, are being encountered by publics at street-level, generating new kinds of interactions, dynamics and effects (Gangneux et al. Citation2022; Humphry Citation2021; Nassar et al. Citation2019). One such example, InLinkUK smart kiosks, is illustrative of an ‘algorithmic system controversy’ in the way that such ‘street-level’ objects can support predictive policing of local communities, in this case through the introduction of a call blocking algorithm to prevent drug-dealing in a low-income borough of south London.

InLinkUK was a joint venture between British Telecom (BT) and the US company Intersection, with outdoor advertising partner Primesight, established with the aim of replacing BT public payphones with ‘smart’ kiosks in cities throughout the United Kingdom (see Note 1). InLinkUK represented an expansion of the model of LinkNYC, a network of similar kiosks in New York City built and installed by Intersection, a company backed by Google’s urban subsidiary, Sidewalk Labs. Each obelisk-shaped unit sported two front-facing digital screens for displaying advertising, and included services free to the public at the point of use, including Wi-Fi, USB phone charging and telephone calls to landline or mobile numbers via the inbuilt tablet (InLinkUK product statement, 2019).

Following their introduction, the InLinkUK kiosks raised concerns over their potential for surveillance (Mueller 2018; Atkin 2018), particularly through the inbuilt cameras (two in the screens and one in the tablet), despite company statements that the cameras were not operational. The modular design – a literal ‘black box’ – housed an array of environmental and mobile sensors, making it difficult to determine what equipment was installed or activated. Uncertainty over its operations and a lack of transparency about the kiosks’ data capabilities also made it difficult to determine the extent to which user and pedestrian data (such as emails provided at registration and MAC addresses of mobile devices) were collected, used and sold (Atkin 2018; Gangneux et al. 2022).

In late 2018, reports emerged of the InLinkUK kiosks being used for drug-dealing activities in Tower Hamlets, a borough of east London. Local police had identified up to 20,000 calls from a small number of kiosks that had purportedly been made to known dealers over a four-month period (Burford 2019). In response to complaints by local residents, and a public statement by the council and police, InLinkUK turned off the calling facility (Ballard 2018). In 2019, a ‘call blocking feature’ was installed in the units, using a pattern recognition algorithm to block outgoing calls to certain numbers based on indicators including the frequency of calls, how long calls lasted and ‘insights provided by authorities’ (Fisher 2019). The call blocking feature was packaged up and branded as ‘a new automatic anti-social call blocking technology’ by InLinkUK and rolled out to all 494 kiosk units (InLinkUK 2019).
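
Based on the indicators named in the reporting (call frequency, call duration and numbers flagged by authorities), a rule-based filter of this general kind might be sketched as follows. All thresholds and the scoring logic are hypothetical, since the actual InLinkUK algorithm was never made public.

```python
# Illustrative sketch only: a rule-based filter using the three indicators named
# in the reporting. All thresholds and logic are hypothetical assumptions; the
# actual InLinkUK algorithm was never disclosed.
from dataclasses import dataclass


@dataclass
class CallRecord:
    kiosk_id: str
    number: str
    duration_seconds: int


def should_block(number: str,
                 recent_calls: list[CallRecord],
                 flagged_numbers: set[str],
                 max_calls_per_day: int = 15,
                 short_call_seconds: int = 20) -> bool:
    """Return True if an outgoing call to `number` would be blocked."""
    if number in flagged_numbers:                   # 'insights provided by authorities'
        return True
    calls_to_number = [c for c in recent_calls if c.number == number]
    if len(calls_to_number) > max_calls_per_day:    # unusually frequent calls
        return True
    short_calls = sum(1 for c in calls_to_number if c.duration_seconds < short_call_seconds)
    return short_calls > max_calls_per_day // 2     # many very short calls
```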

Unlike the facial recognition technology trial in Sweden, which drew the attention of media and legal authorities, resulting in wide exposure and a penalty for the school involved, the ‘call-blocking’ algorithm received little public scrutiny and its long-term implications remain obscured, black-boxed in such a way that its political nature remains concealed (Shapiro 2020). Indeed, media reporting framed the change as a welcome and needed intervention, despite the fact that the call blocking algorithm represented a new means to algorithmically police and problematise already heavily policed neighbourhoods.

(ii) Testing diverse methods

Various methods are assembled in this section to explore the relationality of algorithmic system controversies (such as the visible and invisible, technical and political, plus human and nonhuman). Public engagement with algorithms in public services is often discrete, one-off and institution-led (Pallet et al. 2021). Our endeavour corresponds with growing calls to expand public engagement with emerging technologies using multiple techniques, durations, locations, and stakeholders (BEIS 2021). We curate a set of diverse methods to explore algorithmic system controversies in multivalent ways. The six methods outlined are: Deliberation, Play, Visualisation, Narration, Refusal, and Experimentation. Each method has a definition and a brief synthesis of interdisciplinary academic and grey literature to illustrate key elements. These methods can be used individually, or in combination, to help build pedagogic encounters.

Deliberation: a method to inspire novel ways of deliberating complex socio-technical issues, by utilising collaborative and inclusive mechanisms, to provoke collaborative dialogue and decision-making across a range of contexts and durations. Deliberation can be defined as a shared public discussion which requires sufficient information for participants to assess alternatives, balanced evaluation of issues, plus equal consideration of diverse participants (Fishkin 2009). Deliberation differs in terms of the scope of inquiry and how it is conducted. For instance, ‘hybrid forums’ (Callon, Lascoumes, and Barthe 2009), characterised by ‘open spaces’ for groups, or a collective, aim to explore uncertainties associated with particular controversies, whereas ‘roundtables’ prioritise shared dialogue and action between diverse stakeholders, so as to ‘stimulate robust, sustained and meaningful public engagement in a complex area of scientific and technological innovation’ (Involve 2019, 4). Promising innovations include people’s councils (McQuillan 2018), which prioritise inclusive structures to examine and contest machine learning. There is also growing interest in algorithmic impact assessments that enable coordination between institutions and stakeholders ‘to identify, minimize, and mitigate harms’ (Moss et al. 2021, 7).

Play: a method to inspire novel ways of playing with thresholds, risk, strategy, and immersion in response to new combinations, and applications, of human and non-human intelligence. Play is an activity of self-realisation (Henricks 2014): a ‘balancing act’ of being in and out of control, finding out what we can and cannot do – which extends our capabilities through learning about ourselves, others, and the world. Games made with particular intent – known as ‘serious games’ – are also called ‘learning games, games for learning, educational games, and training games’ (Landers 2014, 754). Games developed for the general public to play with, and learn about, algorithms and artificial intelligence are growing, addressing topics such as algorithms used in New York City, in the United States criminal legal system, and in exam grading in the United Kingdom during the COVID-19 pandemic (Automating NYC 2022; Hao and Stray 2019; EFS 2023). Games and game engines can also help demystify AI (Erler 2019).

Visualisation: a method to inspire novel ways of visualising algorithmic systems by communicating the contingency of data selection, mediation, and perception. Visualisation is defined as ‘a set of techniques by which to manage, calculate, and act on a world of incomplete information’ (Halpern 2014, 30). As new technologies make the creation and dissemination of visualisations more accessible, critical inquiry about people’s engagement with these semiotic forms of meaning-making, and the implications for democratic participation, is vital (Kennedy and Engebretson 2020). The process of ‘doing data differently’ within particular professional contexts, or with population groups, offers such pedagogic encounters: surfacing the partiality and complexity of data with educators to generate opportunities for professional dialogue and reflection about the contingency of data (Burnett, Merchant, and Guest 2021); or developing critical consciousness and resistant practices with young mobile users in regard to personal data (Selwyn and Pangrazio 2018). Visualisations could also integrate feminist and design justice perspectives which address power imbalances and prioritise community building for collective change (D’Ignazio and Klein 2020; Costanza-Chock 2020).

Narration: a method to inspire novel ways of narrating AI perceptions and portrayals, plus science and technology developments – with a particular focus upon varying expertise, contexts, values and beliefs. Narratives are composed of four dimensions (Somers 1994): ontological narratives based on stories that people generate to make sense of everyday life; public narratives connected to broader cultural and institutional formations; metanarrativity, such as the master narratives of sociological theory (e.g., Enlightenment, Industrialisation, Capitalism); and conceptual narrativity, which intersects the preceding narratives with other market, institutional, and organisational forces. The potential of narratives to communicate science and technology information to ‘nonexpert’ audiences is due to narratives being more accessible and engaging than ‘traditional logical-scientific communication’ – although ethical considerations are raised in terms of strategy selection (persuasion or comprehension), plus appropriate levels of accuracy (Dahlstrom 2014, 13167). Narration can be used to include more voices and experiences in the storytelling of technological development, to counter those of dominant market players (Söderström, Paasche, and Klauser 2014). Additionally, speculative narratives and fictions (Shapiro 2020; Markham 2021) have the potential to open up new possibilities about the ways in which algorithmic systems are envisioned and designed. Fictional and non-fictional narratives are central to the understanding and development of science, such as global narratives (e.g., cross-cultural comparisons), media coverage, and individual case studies (Royal Society 2018).

Refusal: a method to inspire novel ways of refusing the injustices, inequalities, and oppression amplified by algorithmic systems. A ‘politics of refusal’ calls for ‘broader national and international movements that refuse technology-first approaches and focus on addressing underlying inequities and injustices’ (Crawford 2021, 226–227). Linked to abolitionist (Benjamin 2019) and data justice approaches (Data Justice Lab 2021), refusal is an emerging movement of collective response and solidarity to contest and reconfigure technologically mediated relationships (James et al. forthcoming). Refusal is multivalent in its approach and application. For example, The Feminist Data Manifest-No is ‘a declaration of refusal and commitment’ based on diverse feminist thinking to insist upon radical and alternate futures (Cifor et al. 2019). In the context of academic publishing and its digital binds, opportunities to dissuade, detour, and disrupt existing practices are framed as ‘ethical executions of code’ (Swist and Magee 2017). For the pursuit of democratic agency in the datafied society, data activism offers both ‘proactive’ and ‘reactive’ forms of intervention, such as data-based advocacy or encryption practices (Milan and van der Velden 2016).

Experimentation: a method to inspire novel ways of experimenting with technologies, practices, policy-making, and diverse publics. The ‘politics’ of experiments, characterised as interventions reliant on material devices demonstrating both the ‘public proofs’ and uncertainty of such experiments, offers a unique way to study democracy (Laurent 2016). In doing so, experimentation can explore both technical and ethical possibilities, such as prototyping participatory platforms with diverse stakeholders and expertise to address urban challenges (Swist and Magee 2019; Swist et al. 2017). An age of datafication invites experimentation with emerging models of governance, including: data sharing pools, data cooperatives, public data trusts, personal data sovereignty, policy dashboards, and online civic consultation platforms (Micheli et al. 2020; Maffei, Leoni, and Villari 2020). In addition, hackathons seek to develop specific minimum viable products, or data science solutions, with particular groups, issues and sectors in a short timeframe (Falk, Kannabiran, and Hansen 2021; Turing Institute 2023; Hendery et al. 2019). Another growing experimentation phenomenon is lab and studio-style approaches to support policymaking, collective learning, and diverse expertise about AI and other emerging technologies (CIFAR 2020; Kimbell 2019; EFS 2023). Whichever form participatory experiments take, they potentially enable ‘the assembling of new collectives around contentious objects’ (Lezaun, Marres, and Tironi 2017, 34).

(iii) Making democratic designs

Next, we demonstrate how particular cases and methods can be combined to make democratic designs. This style of design inquiry and practice aims to reveal and confront power relations to create an ‘agonistic collective’: ‘a participatory space of contest in which those structures and exclusions might be experientially encountered and challenged and alternatives offered’ (DiSalvo 2012, 118). The democratic designs outlined below are illustrative of how select cases and methods can help surface the uncertainty of pedagogic encounters.

School network visibility map

The Swedish example of facial recognition technology in schools could be reconfigured into the design of a ‘school network visibility map’. This map would outline the dynamics and uncertainty of FRT, highlighting that the same technology can be used for multiple purposes. FRT is ubiquitous and often developed to be neutral with respect to its domain of application. To make the context of this application more apparent, we could envisage using visualisation and refusal to highlight key parts of the facial recognition technology used in attendance taking. Visualisation would create a network map of a school that, rather than just showing how data is collected and generated in attendance taking, highlights the ways in which other networks operate to create data relationships. This could include visualisations of analogue relationships, such as teachers asking about student wellbeing, that then feed into other digital networks such as student information systems. The point is that this visualisation uses what is seen as non-useful digital data collection (the use of FRT in attendance) while also highlighting already existing digital data networks. It provides a way of asking further questions about all types of algorithmic systems in use in schools, alongside their interconnections (both nascent and established). In doing so, this visualisation of infrastructure becomes a political design, or artefact, that helps to surface the complexity of algorithmic system interrelationships in schools (such as the visible and invisible, technical and political, plus human and nonhuman).
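
A first prototype of such a map could be as simple as a directed graph of data flows that workshop participants extend and contest. In the sketch below, the nodes and edges are invented examples rather than a map of any actual school’s systems.

```python
# Illustrative sketch only: a 'school network visibility map' prototyped as a
# directed graph of data flows. Nodes and edges are invented examples for a
# workshop exercise, not a map of any actual school.
from collections import defaultdict

# edge: (source, destination, kind of relationship)
data_flows = [
    ("FRT attendance camera", "Attendance database", "digital"),
    ("Attendance database", "Student information system", "digital"),
    ("Teacher wellbeing conversation", "Student information system", "analogue-to-digital"),
    ("Student information system", "Municipal education authority", "digital"),
]

graph = defaultdict(list)
for source, destination, kind in data_flows:
    graph[source].append((destination, kind))

# Print the map so participants can discuss which links are visible, contested,
# or could be refused.
for source, edges in graph.items():
    for destination, kind in edges:
        print(f"{source} --[{kind}]--> {destination}")
```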

Mock attendance camouflage event

A pedagogic encounters approach to the Swedish FRT case that takes up refusal to showcase uncertainty would be a ‘mock attendance camouflage event’. This event would draw on some of the countermeasures undertaken by protestors to deny FRT accurate readings (Murphy 2021). These measures include the use of lasers, or of particular graphics on clothing: so-called ‘adversarial t-shirts’ carry designs that can confound object recognition software like FRT (Zolfi et al. 2021). Other examples include face camouflage that prevents FRT from reading the face geometries necessary for identification (e.g., chin, eye shape). One approach, called ‘mask spoofing’, in which a mask is used to highlight how FRT makes inaccurate readings, has been shown to fool FRT systems (Kose and Dugelay 2014). In our example, a mock attendance could be taken with some students wearing masks and others not, with the outcome highlighting the misrecognition that occurs. This use of refusal creates another pedagogic encounter: it highlights how countermeasures are not normative – they are simultaneously tools for protestors and areas of research for computer scientists aiming to improve machine learning systems like FRT. This mock attendance would therefore support collective learning by generating an encounter that exposes the uncertainty associated with FRT, alongside the broader networks and power relations associated with algorithmic system controversies.
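
The pedagogic point of the mock attendance could also be rehearsed computationally. The sketch below models masking simply as a drop in match similarity below the recognition threshold, with all scores invented for the exercise (real mask-spoofing attacks instead exploit the face geometries the model relies on).

```python
# Illustrative sketch only: simulating the 'mock attendance' outcome. Masking is
# modelled as a drop in match similarity below the threshold; all numbers are
# invented for the classroom exercise.
roster = {"Student A": 0.91, "Student B": 0.88, "Student C": 0.90}  # unmasked match scores
mask_penalty = 0.35   # hypothetical effect of wearing a camouflage mask
threshold = 0.80
masked = {"Student B", "Student C"}

for name, score in roster.items():
    effective = score - mask_penalty if name in masked else score
    status = "recognised" if effective >= threshold else "NOT recognised"
    print(f"{name}: score {effective:.2f} -> {status}")
```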

Urban community deliberation process

The design of an ‘urban community deliberation process’ would be a significant way to uncover and address the uncertainty of complex socio-technical issues. As previously noted, deliberation involves shared public discussion with sufficient information for participants to assess alternatives, balanced evaluation of issues, and equal consideration of diverse participants (Fishkin 2009). This was missing in the case of the introduction of a call blocking algorithm into InLinkUK kiosks in Tower Hamlets, an ethnically diverse and rapidly growing area of east London. Deliberation could have provided a way for the community to respond differently and assess the algorithmic impacts, thereby revealing a hidden ‘algorithmic system controversy’. Involving a wide range of local residents in a dialogue about the concerning uses of the kiosks could have generated community-led responses that addressed root causes of poverty, lack of infrastructure and inequality, rather than their symptoms. It could have revealed the partnership between police and the technology supplier in the development of the algorithm, opening up the ‘black box’ to reveal new socio-technical connections, relations and uncertainties. Instead, the police, acting in concert with the technology supplier, locked the community out of involvement in its own affairs and out of scrutiny over a technological solution implemented without accountability or oversight of its impacts or future uses.

Deliberation processes can support collective learning about algorithmic systems that go beyond individual literacy. For example, by involving community members in decision-making about whether a call blocking algorithm was the best solution to address the issues at Tower Hamlets, residents would have developed an understanding of: (i) how algorithms work; (ii) the data sources and filters used by the specific algorithm; (iii) an assessment of its aim, scope and target; (iv) what data sharing arrangements were in place and how these were governed; (v) what forms of accountability and/or protections were in place to prevent the use and sharing of this information; and, (vi) provisions to ensure future changes and uses were not instigated without consent. This learning process would equip communities with a deeper contextual understanding not only of the call blocking algorithm but also of the modular expandability of the InLinks. Ultimately, deliberation could have provided a more democratic and inclusive basis for a community evaluation of the proposed algorithmic solution as well as an option to refuse it if it was deemed not to be suitable or beneficial. The InLinkUK call blocking example demonstrates how fostering a public pedagogy via pedagogic encounters can highlight important issues not only of privacy and transparency but also of the expansion of police powers using algorithmic systems on already over-policed and racialised communities.

Smart kiosk possibilities

Designing opportunities to generate ‘smart kiosk possibilities’ is another example of surfacing uncertainty about algorithmic system controversies. Speculative narratives are future-oriented accounts of what could happen when an algorithmic system is introduced, and allow for scrutiny of its potential harms as well as its justifications. Shapiro (2020) argues that conceptualising systems in their logical extreme is necessary to highlight their dangerous potential. The case of the InLinkUK call blocking algorithm provides an initial example of such a suppositious examination, which could be further expanded to draw out how such a system could support the strengthening of legal enforcement in a community that is already heavily policed and set the stage for future algorithmic interventions. The point of such narrative exercises, which can be combined with other pedagogical methods in the toolkit such as deliberation, play, visualisation or experimentation, is not only to understand what an algorithm does but what it could do, and the paradigms of predictive policing and urban governance that it helps to support and promote.

Discussion

The pedagogic encounters toolkit offers a selection of cases, methods, and designs to learn together about the emergence, relationality, and uncertainty of algorithmic systems and democratic possibilities. While the toolkit’s application in this paper focuses upon education and urban contexts, the potential audience for this toolkit is open to researchers, policymakers and practitioners from various disciplines and sectors. This is because algorithmic system controversies are cross-sectoral and transdisciplinary, which means that emerging technology challenges and opportunities are complex issues that no single sector, government, or discipline can address alone (Whittlestone et al. 2019). The toolkit could also work usefully with other fields such as explainable AI, to examine the limitations of participation in intervening in AI controversies, where what is human and machine is not clearly delineated, and responsibility is diffused across a digitally networked society.

We anticipate, and invite, readers to join us in collectively exploring this toolkit in relation to other contexts. As algorithmic systems serve to outsource, or supersede, organisational discretion, we hope that multiple government, professional and industry bodies can collectively curate the toolkit to inform future research, policy and practice. Co-curation can be achieved, for example, by expanding the set of cases to better understand algorithmic systems in context, trialling proposed methods or adding new ones, and building a collection of democratic designs which can be adopted, or adapted, for different settings.

The pedagogic encounters toolkit can also potentially inform future research in relation to planetary-scale algorithmic system controversies, such as the innovation associated with the COVID-19 pandemic. This global crisis has accelerated the use of algorithmic systems for policing, welfare, education and urban regulation. Many nations have leveraged existing health, urban and mobile data sources, in combination with smart sensing technologies, QR codes, CCTV, drones, robots and facial recognition to develop large scale public health order systems. In some ways these function as colossal, planetary-scale algorithmic assemblages that dynamically feed in data and output actionable knowledge across a wide range of domains and agencies (for example, contact tracing, controlling movements, securitising workplaces and policing borders). While these systems have been put in place as emergency public health measures to prevent the spread of the virus, they indicate an extensification of these assemblages and forms of ‘smart’ governance on a scale with consequences and future uses we are yet to fully understand.

Conclusion

In this paper we presented a novel conceptual and methodological toolkit for better understanding, and responding to, algorithmic system controversies. We invented this research apparatus to ‘slow down’ expert reasoning and redistribute expertise in the design and conduct of research practices, so as to spark more diverse opportunities for knowledge politics to emerge (Whatmore and Landström 2011). The specific purpose of our endeavour was to open up forms of participation and expertise in a datafied society, by creating new ways of thinking and acting focused upon pedagogic encounters: learning together about the emergence, relationality, and uncertainty of algorithmic system controversies and democratic possibilities. Our overall aim has been to begin to creatively surface, and pragmatically address, the inherent complexities that are part of algorithmic systems in society. In doing so, we have begun to collectively curate a pedagogic encounters toolkit that has the potential to inspire, and critically expand, public engagement with evolving technologies, society, and democracy.

Acknowledgements

A version of this paper was presented at the Australian Association for Research in Education (AARE) conference in 2021 and at a Technical Democracy Collective Workshop hosted at the University of Sydney in 2022, and we thank participants for their feedback.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This project was funded by the Australian Research Council and the University of Sydney.

Notes

1 In 2019, InLinkUK went into bankruptcy and the network of 494 kiosks at the time was taken over by British Telecom, which rebranded them as ‘digital street hubs’.

References