Research Article

Youth perspectives on technology ethics: analysis of teens’ ethical reflections on AI in learning activities

Received 14 Sep 2023, Accepted 15 Apr 2024, Published online: 14 May 2024

ABSTRACT

The integration of AI technologies in all domains of life raises ethical debates. Children and young people are not exempt from these discussions, since AI is an increasing part of their lives. This study explores teenagers’ (aged 13 to 16 years) views on AI with a focus on ethical thinking. It is a retrospective empirical study in which data from three cases involving young people in technology learning activities were analysed to answer the following research questions: (i) What ethical challenges around AI do young people reflect upon, and (ii) what ethical dimensions do these reflections connect to? We adapted an existing framework outlining dimensions of ethical thinking to identify young people’s ethical reflections on AI in various learning activities. Our findings show that young people reflect on AI ethics even without dedicated teaching on it, are able to consider various ethical dimensions, and make connections to everyday uses of AI that raise ethical issues. Based on our findings, we advocate for an integrated approach to AI and ethics in learning activities about technology. We believe the framework presented in this study can be a valuable tool to inform further research and practice.

1. Introduction

Integration of Artificial Intelligence (AI) in digital technologies is making AI increasingly present in our everyday lives (Ashok et al. 2022; Williams et al. 2022). Systems that are becoming dependent on AI range from energy grids, healthcare, food supply chains and banking to communication, education and learning. While there is recognition of the positive impacts of AI in tackling societal challenges, such socio-technical transformation is not exempt from tensions (Ashok et al. 2022). AI raises ethical dilemmas, not only because it is regarded as a turning point in the organisation of society (Floridi 2018), but also because AI misuses have been shown to have a huge negative impact, as in recent scandals such as Cambridge Analytica (Venturini and Rogers 2019), the influence on voters’ behaviour in election processes through social media bots (Gorodnichenko, Pham, and Talavera 2021), and the discrimination and exclusion of various groups due to gender, ethnicity and race biases (Khalil et al. 2020; Lambrecht and Tucker 2019). These wrongdoings with AI implementations have put the ethics of AI in the spotlight.

The increasing dependency on AI systems generates concern among people, since many negative impacts, such as the widening of social inequalities, are rooted in AI design, with biases affecting the whole development process (Arista et al. 2021). Flawed AI systems have contributed to triggering what has been referred to as a tech ethics crisis, exposing the limitations of the technocratic paradigm dominating the computer science field (Raji, Scheuerman, and Amironesei 2021). As part of the measures to ensure ethical AI systems, several guidelines have been produced, such as the Ethics Guidelines for Trustworthy AI by the High-Level Expert Group on Artificial Intelligence (European Commission 2019). These guidelines specify the ethics requirements that AI must meet, emphasising respect for human dignity and a ‘human-centred approach’. Significant within such guidelines are respect for human autonomy (respect for freedom, autonomy, self-determination, choice), prevention of harm (mental, physical, considering safety, security, robustness), fairness (equality, freedom from bias, prevention of discrimination and stigmatisation) and explicability (explainability, transparency). In recent material produced by the European Commission (2019), seven key requirements for AI are listed: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. Similar requirements are also discussed in other documents, such as the Ethically Aligned Design guidelines (IEEE 2017). Ethical guidelines have also been issued regarding AI use, such as the Association for Learning Technology’s Framework for Ethical Learning Technology (ALT 2021) and the Association of Internet Researchers’ ethical guidelines (Franzke et al. 2020).

Because of the pervasiveness of AI in all domains of life, scholars have warned against delegating ethical thinking to an elite, calling for increasing the participation of diverse stakeholders in the critique of AI systems (Raji, Scheuerman, and Amironesei 2021). Considering that children and young people today are already growing up in an era mediated by AI, scholars and educators argue for recognising them as social actors affected by AI (DiPaola, Payne, and Breazeal 2020; Williams et al. 2022). From a policy perspective, guidance documents such as UNICEF’s AI for Children have explored how to embed children’s rights in AI regulations (UNICEF 2021b), and children’s and young people’s perceptions of AI have been the focus of study (UNICEF 2021a). Documents like the UNICEF report have mapped AI opportunities and risks as perceived by young people, calling for including AI in formal education curricula to increase understanding of AI. Although children’s views of technology and AI ethics have been explored and children’s technology and AI education has been developed through some studies, the literature is still quite limited in understanding children’s views on AI ethics. Youth play crucial roles in addressing the various polycrises facing the world, such as growing technical inequities in terms of access, use of resources and labour, which contribute further to the climate crisis, as they are set to be impacted the most by a future world that is inequitable and uninhabitable (UNICEF 2022). Thus, nurturing critical and ethical perspectives in youth towards technology development, especially AI systems, is crucial to ensure that they question current trends and that, as the next generation of designers, developers, and other experts, they are already mindful of the costs and impact of these technologies on their everyday lives and on society in general. This paper contributes to the fields of technology ethics education and child-computer interaction (CCI) by exploring teenagers’ (aged 13 to 16 years) views on AI with a focus on ethical thinking. Our analysis is guided by the following research questions: (i) What ethical challenges around AI do young people reflect upon, and (ii) what ethical dimensions do these reflections connect to? Through this study we seek to advance research and educational practice on AI technology involving youth with a focus on ethical thinking.

We believe this study tackles a timely and relevant issue from a novel perspective that takes into consideration the specificities of ethical thinking about AI among young people. In the following sections, we review how ethics has been approached in technology learning and CCI. We describe the three cases examined in this empirical study and introduce our analysis framework, which is an adaptation of Reiss’s (2010) dimensions of ethical thinking. Then, we describe the findings stemming from the qualitative analysis of the cases and discuss them in light of existing literature. We conclude with implications for practice and recommendations for future research.

2. Related research

2.1. Learning about technology ethics

Ethics is related to what we think is morally wrong or right, and how we arrive at that decision (Reiss 2010). A variety of ethical frameworks exist, based on which we all make these decisions in our everyday lives, often unconsciously and without engaging in a rational decision-making process. We grow up within these frameworks, which are part of the moral traditions in our cultural contexts. Often, they can be linked with religion, although in current multicultural and pluralist societies religious morals are not always hegemonic (Reiss 2010). In the Western tradition, a common ethical framework is consequentialism – the view that nothing is right or wrong per se; instead, the consequences of actions determine whether something is eventually right or wrong (McKim 2010). In hedonism, also referred to as hedonistic utilitarianism, the attention is also placed on consequences, although in this case the focus is on positive consequences with an emphasis on pleasure (Brink 2007). Utilitarian perspectives can be contrasted with the deontological approach, in which certain actions are seen as right or wrong regardless of their consequences. For example, according to the deontological approach, killing somebody could be considered intrinsically wrong, regardless of the consequences or the reasoning behind the action. In virtue ethics, the moral characteristics of people are weighed, and certain characteristics are considered more virtuous than others (Reiss 2010). Ecological ethics draws attention to relations and their interdependencies, broadening the context of moral decision-making (Flinders 1992; Rolston 2017). In addition to these ethical frameworks, in recent years there has been a growing interest in the ethics of care, which emphasises relational practices based on mutual recognition and development (Larrabee 2016).

Ethics pedagogy has been a focus of debate, with positions both in favour of and against making ethics a subject of study in school settings. John Dewey is one of the most influential voices calling for including ethics as a subject in school curricula, arguing that learning about ethics contributes to developing an open-minded attitude, moral imagination, and skills for dealing with everyday dilemmas in general (Boydston et al. 2008). In psychologically informed guidelines for an ethical curriculum, such as those advocated in Rest’s model (Rest 1982), ethical thinking is presented as a progression through a number of stages with four key components: ethical sensitivity, reasoning, motivation and implementation. In the context of science education, scholars and educators have called for addressing socioscientific issues in the classroom, and various frameworks for ethical thinking have been proposed: see, for instance, Beauchamp and Childress’s (1983; 2008) four principles of promoting beneficence, non-maleficence, autonomy and justice; Reiss’s (2003; 2006) selection of consequences, autonomy, rights and duties, and virtue- or care-based ethics as main frameworks; Sadler and Zeidler’s (2004) focus on deontology, consequentialism and care as moral philosophies to guide ethical decision-making; and Saunders and Rennie’s (2013) framework advocating the inclusion of pluralistic thinking.

Recent literature on technology and ethics has pointed to the scarce support that schools, and even higher education institutions, offer students to learn about the links between ethics and technology (Vakil and McKinney de Royston 2022). The prevalence of instructional approaches in technology education that uncritically assume the objectivity and neutrality of technology has been questioned (Holmes et al. 2021; Holmes and Tuomi 2022; Ko et al. 2020; Vakil 2018), as such approaches have shown themselves to be of limited use in facing the complex scenario created by the integration of AI in everyday life. Also, as some scholars in the AI education community have noticed, researchers in this field do not receive support through training to face the ethical questions linked to AI in education and learning environments (Holmes et al. 2021). In this regard, an increasing number of voices in technology and education are calling for better connecting technology education with ethical reflection to better support students in understanding the complexity and potential impact of emergent technologies like AI (see e.g. Eglash et al. 2020; Kafai et al. 2021; Kaspersen et al. 2022; McGee and Bentley 2017; Pinkard et al. 2017; Vakil 2018).

Initiatives that foster students’ ethical thinking on technology can be differentiated based on the pedagogical strategy adopted, the most popular ones focusing on the implications of already existing systems (see e.g. Williams et al. 2022), embedding ethics as part of technical lessons (see e.g. DiPaola, Payne, and Breazeal 2020; Saltz et al. 2019), and connecting ethics with design activities (see e.g. Levick-Parkin et al. 2021). In all of these cases, the aim is to support reflection on the values embedded in technology, opening ethical discussions on various aspects such as human rights, democracy, and people’s capability of action in a future digitalised society.

Some examples of initiatives promoting students’ reflection on technology’s ethical issues have used conversations and narratives, scenarios, as well as games and collaborative tools to support analytical and critical thinking (see for instance Briggle et al. 2016; Gisewhite 2023; Kaspersen et al. 2022; Lee et al. 2022; Mortari and Ubbiali 2017; York and Conley 2020). Embedding ethical thinking in technology design activities has been used as a strategy to make students aware of the many decisions that are part of the design process and the values underlying those decisions. The assumption is that such activities enable students to understand how technical systems work, making visible the values and power relations underlying such systems (Kafai and Peppler 2011). Some examples of approaches tackling ethics in technology design activities can be found in critical making (Ratto 2011), justice-centred computing (Vakil 2018), computational empowerment (Schaper et al. 2022) and culturally relevant computing (Kafai et al. 2021; Scott, Sheridan, and Clark 2015). While exposing inequalities and subverting power tends to be at the centre of these approaches, design has also been used to trigger reflection and expand learners’ imaginaries of what technology should and should not do (see for instance Levick-Parkin et al. 2021; York and Conley 2020).

In the context of science and technology education, there is a corpus of work focusing on how students think about ethics (Buntting and Ryan 2010; DeLuca 2010; Kibert et al. 2012; Steele 2016), with frameworks for assessing students’ ethical thinking being developed (see for instance Kohlberg 1975; Steele 2016; Reiss 2010). The present study builds on parts of Reiss’s (2010) framework, focusing on several dimensions to explore teenagers’ (aged between 13 and 16 years) ethical thinking. We believe the insights presented in this paper are valuable for supporting the development of learning activities tackling ethical issues with AI.

2.2. Ethics in child computer interaction

Given that children are considered vulnerable both as a participant group in research activities and as users of technology, it is not surprising that ethical issues have been underscored in working with them. Thus, in addition to empirical studies, one can find workshops, panels, and a comprehensive literature review scrutinising how ethics has been addressed in CCI research (e.g. Antle et al. 2020; Constantin et al. 2022; Frauenberger et al. 2018; Van Mechelen et al. 2020; Waycott et al. 2017). To a great extent, the emphasis of these efforts has been on the development and use of procedural research ethics (Van Mechelen et al. 2020), with an emphasis on respecting children’s rights in the vein of the UN Convention on the Rights of the Child (United Nations 1989). The cases reported in this study align with the convention, in particular with Art. 12 (respect for the views of the child), Art. 13 (freedom of expression), Art. 28 (right to education), Art. 29 (goals of education) and Art. 42 (knowledge of rights).

Limitations in addressing research ethics have also been identified in the literature, for example in reporting on formal ethics approval procedures or on situational ethics practices (Van Mechelen et al. 2020). As a topic related to research ethics, participation ethics has also received some attention in the CCI literature, in the sense of ethical aspects intermingled in children’s participation and participatory design with them, addressing questions around recruiting and informing children, attribution, ownership, consent and impact, as well as complex relationships and power relations within the design team (e.g. McNally et al. 2016; Read et al. 2013; Spiel et al. 2018; Spiel et al. 2020). However, surprisingly few CCI studies report on issues related to participation ethics (Van Mechelen et al. 2020). Similarly, design ethics, referring to understanding the ethical impacts of design and various digital technologies, has received little attention in CCI (Van Mechelen et al. 2020). Yet, there is an increasing interest in the topic due to emerging technologies with obvious ethical issues associated with them (e.g. Ali et al. 2021; Antle et al. 2022; Bilstrup, Kaspersen, and Petersen 2020; Cagiltay et al. 2020; DiPaola, Payne, and Breazeal 2020; Grammenos 2016; Hourcade et al. 2023; Pavarini et al. 2020; Rubegni, Malinverni, and Yip 2022; Schaper et al. 2022; Schaper, Malinverni, and Valero 2020). Moreover, recent studies show that children can reflect on ethical issues relating to design and digital technology when properly scaffolded (Antle et al. 2022; Bilstrup, Kaspersen, and Petersen 2020; DiPaola, Payne, and Breazeal 2020; Kaspersen et al. 2022; Rubegni, Malinverni, and Yip 2022; Schaper et al. 2022; Schaper, Malinverni, and Valero 2020). In this line of research, recent empirical studies have analysed how design activities on AI technologies contribute to triggering participants’ ethical thinking, helping them to identify underlying design agendas and discuss issues connected, e.g. to privacy, accountability, and responsibility (Bilstrup, Kaspersen, and Petersen 2020; DiPaola, Payne, and Breazeal 2020).

When exploring ethical issues, CCI scholars have included children in these explorations utilising a variety of approaches (Schaper et al. 2022). Among those, design futuring approaches have been utilised in several studies, in which children are oriented towards the future to critically discuss the ethical implications of AI-based technologies, for example, to consider the issues with algorithmic decision-making (Skinner, Brown, and Walsh 2020) and with facial recognition technologies (Tamashiro et al. 2021). Skinner, Brown, and Walsh (2020) worked with children to envision the role and responsibilities of a future AI librarian, focusing on the ethical implications of who is serviced first and why, and what this means with regard to power and privilege. Tamashiro et al. (2021) set their future fiction around life on Mars, encouraging participants to consider ways to game a facial recognition system to keep their identities hidden, and what that would mean in the context of privacy. Ventä-Olkkonen et al. (2021) employed design fiction to tackle the societal issue of inclusion, with children working in groups to envision future technological solutions; this revealed nuanced ethical implications of such technologies, including appropriate forms of intervention. Sharma et al. (2022) built a case for design futuring approaches as mechanisms for inclusion and diversity, presenting several examples of children critically designing for inclusive and diverse futures. These approaches have enabled children to critically reflect on the ethical implications of future technologies, especially concerning issues of privacy, inclusion, and diversity. These concerns resonate with the broader ethical implications of using AI technologies with children (UNICEF 2021b) and with ethical AI principles more generally. The cases presented in this study connect with UNICEF’s recent work on AI (UNICEF 2021a; UNICEF 2021b), sharing an interest in providing an understanding of young people’s views and fostering their active participation in discussions on technology and AI.

While research in CCI has shown increasing interest in ethical issues surrounding research, design, and digital technology, the existing research is still limited, for instance in defining ethics and in its theoretical grounding, as well as in addressing design ethics with children (Antle and Kitson 2021; Chan 2018; Van Mechelen et al. 2020; Yarosh et al. 2011). This study contributes to filling these gaps by showcasing how to consider various dimensions of ethical thinking on and with AI technologies when teaching technology design ethics.

3. Methodology

3.1. Lens to analyse young people’s ethical thinking

The research questions guiding the empirical analysis of the three cases involving young people in technology learning activities are: (i) What ethical challenges around AI do young people reflect upon, and (ii) what ethical dimensions do these reflections connect to? To respond to these questions, this study builds upon prior science and technology education research focusing on how students think about ethics (Buntting and Ryan 2010; DeLuca 2010; Kibert et al. 2012; Steele 2016), with frameworks for assessing the development of people’s ethical thinking (see for instance Kohlberg 1975; Steele 2016; Reiss 2010).

In this study, we build on Reiss’s (2010) dimensions of ethical thinking, with some adaptations to fit our specific cases. It is worth highlighting that Reiss’s (2010) framework has a broad scope and already encompasses other frameworks on ethical thinking in the context of science and technology education. In the context of this study, Reiss’s (2010) framework was chosen because, in addition to identifying young people’s ethical frameworks, it includes other dimensions that help in depicting young people’s ethical thinking. The dimensions identified by Reiss include the scope and perspective adopted by individuals, the ethical framework(s) underlying their reflections, the interests and temporality considered, and the knowledge displayed (see Table 1).

Table 1. Adaptation of Reiss’s (2010) ethical thinking dimensions.

While Reiss proposes a set of indicators to trace moves towards more complex and nuanced ethical reflections, we believe the indicators are also valuable for taking a snapshot that captures young people’s ethical thinking at a given moment. These snapshots reveal how young people think about and discuss such ethical issues in their everyday lives. This gives an opportunity to explore what comes intuitively to youth with regard to the vocabulary used, the issues mentioned, and how they critically consider such issues when prompted. Through these lenses on young people’s ethical thinking, we can build our understanding of the issues important to them and consider how to engage them further in ethical considerations.

3.2. Description of the cases and data collection

In this study, we analyse data from three cases based on learning activities that took place in different education environments in the city of Oulu, Finland, to gain insight into teenagers’ lived experiences with AI technology and capture their perspectives. The three cases examined in this study were selected because (i) they were thematically coherent and intrinsically connected to ethical issues in science and technology, (ii) they involved participants of a similar age (between 13 and 16 years old), (iii) they took place in education environments, and (iv) they combined discussion and reflection with design and making activities. The similarities among the cases allow us to analyse data both within each situation and across situations.

All three cases presented in this study aimed at supporting youth’s understanding of AI and digital technologies, while triggering ethical reflections by problematising certain aspects of socio-technical practices (see Table 2). Two of the cases took place in local schools, while one case was entirely situated at university premises. In two cases, activities were also conducted in a Fab Lab, a digital fabrication laboratory equipped with a standardised array of computer-controlled tools. Novice users typically utilise tools such as CAD (computer-aided design) software, 3D printers, vinyl cutters, and electronics tools to create low-fi prototypes of personal projects. In Case #1, there was only a short Fab Lab visit, to give students an initial understanding of what can be done in a Fab Lab. In Case #2, working in the Fab Lab was a central part of the activities: participating students worked in groups to imagine a future classroom and to create tangible low-fi prototypes of the imagined technologies in these future classrooms using a laser cutter and a 3D printer. Building on these imaginaries, the ethical implications of AI were critiqued in subsequent sessions in the school’s classroom.

Table 2. Cases analysed in this study.

Case #1 Overview. This case, with 21 ninth-grade students, was conducted as part of a work practice week that is part of the national curriculum. Our study was presented to the students as an option for this work practice programme, in which students engage in a one- to two-week internship at a workplace that meets specific criteria. The choice of workplace is left to the students, who can select which place to apply to based on their personal interests or connections. To recruit participants for the study, a local international school was contacted, and the study was discussed with the school’s student counsellor. Invited by the counsellor, four researchers representing different fields (educational technology, information systems and electrical engineering, and information studies) visited the school to briefly present the project. A flyer was also provided to the pupils via the counsellor before the visit. This particular international school was selected to facilitate data collection in English, given the involvement of international researchers in the study. This public school follows the national curriculum using the International Baccalaureate framework. All students who demonstrated interest were welcomed to participate in both the work practice and the study. Prior to their involvement, informed consent was obtained from the participants and their legal guardians.

Method: The objectives of the work practice week included learning about AI technologies and their impact; gaining research experience both as a research participant and as a co-researcher; and developing ideas for AI-powered technologies. As part of the activities, the participants reflected on AI and its use in different settings; used an AI assistant to operate a robot hand they had assembled; and participated in a business design workshop, working in groups of three to four members, to generate solutions using AI for good. At the end of the week, the teenagers presented their business ideas following a shark tank format. In personal interviews, the participants were asked about their conceptions of and thoughts about AI and, specifically, their everyday experiences of AI, including the ways in which AI tools are used to recommend and generate media content.

Case #2 Overview. This workshop was organised with one ninth-grade class (n = 18) at a local public high school after an English teacher requested collaboration, having learned about the researchers’ activities at the city’s summer festival on Science, Technology, Engineering, Arts, and Mathematics (STEAM) education. While the school caters to children living locally (that is, within a certain distance from the school) and teaching is mainly conducted in Finnish, our study was seamlessly integrated into the students’ routine English curriculum. This integration was driven by the teacher’s aim for the students to interact with international researchers to hone their spoken English competencies. Simultaneously, it provided an opportunity for the students to delve into a STEAM topic. Three researchers facilitated the workshop sessions, with the support of the students’ English class teacher. Informed consent and assent were requested from guardians and students before any data collection.

Method: The workshop started with a 3.5-hour session at a local Fab Lab, where participants (n = 18) were introduced to AI/machine learning in everyday life. Participants worked in small groups (n = 3) to imagine a future teacher, draw their imaginaries, and create low-fi prototypes using laser cutters, 3D printers and other (arts and crafts) materials, and then presented their designs to the others. In the second and third sessions (75 minutes each), researchers visited the school to discuss concepts of ownership and fairness with respect to AI-generated content and algorithmic decision-making, through various examples. Participants were encouraged to consider the intended and unintended consequences of using AI in their everyday lives, relating those issues to their future teachers (designed in the Fab Lab) and to how those future entities could be made more fair, trustworthy, and accountable.

Case #3 Overview. This study was carried out with two seventh-grade classes with students (N = 38) aged between 13 and 15. The study is part of a four-year critical design and making project and was carried out in collaboration with the city of [blinded]. The project addressed the city representatives’ suggestion to examine how critical design and making could be used to address bullying in the school context. During the project, we collaborated with three different schools and numerous classes. The study described in this paper was conducted in a local high school where we collaborated with an IT teacher. The study was incorporated into the students’ IT classes, serving as a component of the comprehensive computing education embedded in their curriculum. Before the activities, the students were divided by the teacher into eight groups, each with four to five members. The sessions were held separately for the two classes, each including four groups, which participated weekly for six weeks. The study involved seven researchers, of whom three conducted the sessions, supported by the teacher, and participated in data collection. Informed consent and assent were requested from both the students and their legal guardians.

Method: The sessions analysed in this study centred on bullying and design activism. During the sessions, the teenagers reflected on bullying and how it manifests in their community, and ideated solutions to produce change through apps and activism campaigns. While the project activities did not explicitly refer to AI or ethics, participants’ reflections tackled ethical challenges related to AI technologies, and some of the app designs produced included AI components. The sessions’ structure and content were prepared by the researchers, who shared them with the teacher in advance for comments and refinement. A typical session included a recap of what had been done previously, an introduction to the session topics, independent work by the teens, and a short recap at the end. The sessions included both individual and group tasks.

3.3. Data analysis

The process for analysing the data was guided by the research questions, which are two-fold: (i) What ethical challenges around AI do young people reflect upon and (ii) what ethical dimensions do these reflections connect to?

The first phase of the analysis was inductive and focused on identifying the ethical challenges young people reflected upon. Depending on the case, this analysis was based on transcribed interview data, teenagers’ presentations and group discussions, questionnaire responses, paper prototypes (drawings and textual descriptions of applications), and/or business plans. These data were thematically analysed treating meaning units as the unit of analysis, that is, constellations of words or visual elements referring to the same central meaning. These were coded and then categorised into themes representing central ethical challenges. The ethical challenges identified in each case were further discussed jointly by the researchers to identify the most prominent ones to examine in more detail. In this study, we have opted for a closer examination of specific instances that reflect the various ways in which students engaged in ethical reasoning on and through AI technologies, in order to generate qualitative insights, rather than focusing on the identification of patterns through quantitative analysis.

The second phase of the analysis included both inductive and deductive reasoning. Again using meaning units as the unit of analysis, the data were now analysed in relation to Reiss’s (2010) ethical thinking dimensions. Since this was a retrospective analysis, not all the dimensions proposed by Reiss (2010) were applicable and some adaptation was needed. Therefore, a codebook guiding the analysis was jointly developed by the co-authoring researchers through several iteration rounds (see the dimensions guiding the analysis in Table 1). In the codebook, a list of tentative codes was produced considering the ethical traditions of consequentialism, deontology, virtue ethics and ethics of care, as well as other dimensions such as scope, perspective, interests considered, temporality, and knowledge (see Reiss 2010). The final codes (see Table 3) were produced after a series of discussions and iterations among the researchers involved in this study. Once there was consensus on the final codes, the researchers involved in each of the cases proceeded to analyse all their case data. This analysis was shared with the researchers from the other cases to find similarities and differences, as well as to identify the selection of ethical challenges evidenced in teenagers’ discussion and design-based activities to include in the paper.

Table 3. Final codes used to analyse the data in this study.
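To make the two-phase coding procedure more concrete, the sketch below illustrates in Python how a codebook organised along the adapted dimensions could be applied to a single meaning unit. This is a minimal illustration only: the dimension names follow the adaptation of Reiss (2010) described above, but the specific code values and the example meaning unit are hypothetical stand-ins, not the study’s actual codebook (see Table 3) or data.

```python
from dataclasses import dataclass, field

# Dimensions of ethical thinking adapted from Reiss (2010), as used in the
# deductive phase. The allowed values are illustrative placeholders; the
# study's final codes are listed in Table 3.
CODEBOOK = {
    "scope": {"egocentric", "peers", "national", "global"},
    "framework": {"consequentialism", "hedonism", "deontology",
                  "virtue ethics", "ethics of care"},
    "interests": {"humans", "other beings"},
    "temporality": {"short-term", "long-term"},
    "knowledge": {"prior knowledge", "newly taught concepts"},
}

@dataclass
class MeaningUnit:
    """A constellation of words or visual elements sharing one central meaning."""
    case: int
    source: str                                 # e.g. "interview", "prototype"
    text: str
    codes: dict = field(default_factory=dict)   # dimension -> set of codes

def assign_code(unit: MeaningUnit, dimension: str, code: str) -> None:
    """Record a code for one dimension, validating it against the codebook."""
    if code not in CODEBOOK.get(dimension, set()):
        raise ValueError(f"{code!r} is not a {dimension!r} code in the codebook")
    unit.codes.setdefault(dimension, set()).add(code)

# Hypothetical meaning unit: a teen enjoys recommendations personally but
# worries about getting hooked over time.
unit = MeaningUnit(case=1, source="interview",
                   text="The recommendations are fun, but you can get hooked.")
assign_code(unit, "scope", "egocentric")
assign_code(unit, "framework", "hedonism")
assign_code(unit, "framework", "consequentialism")
assign_code(unit, "temporality", "long-term")
print(unit.codes)
```

In the study itself, codes were of course assigned manually by the researchers through discussion; a structure like this merely fixes the shared vocabulary that coders agree on before analysing the data.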

To increase the credibility and rigour of this retrospective analysis, we utilised both data and researcher triangulation as well as peer debriefing (see Lincoln and Guba 1985), aiming for a reflexive approach to the research process (Malterud 2001). Triangulation was case-specific: it included analysing multiple data types and involved two or more researchers in the analysis of each case. Peer debriefing was conducted both case-specifically and among all the authors (researchers), and it included discussion of the case settings, the researchers’ positions and the analysis, enabling reflection within the group about different perspectives and preconceptions.

4. Findings

4.1. Case 1: work practice week at the university

The participants in Case 1 were interviewed during their work practice week about their experiences and understandings of AI in everyday situations, specifically focusing on the use of AI in mobile applications and their conceptions of how AI might work in those apps, their feelings towards recommender systems, and their practices when searching for content and information through different apps and search engines. Moreover, AI-generated media, such as so-called deep fakes, and ideas for AI regulation and research were discussed.

The interview questions inquired about the teenagers’ own experiences, inviting reflection on the ethical challenges of AI mostly based on their prior experiences and conceptions. The participants approached AI’s ethical challenges from an individual perspective and thus justified their thinking using an egocentric approach. For example, while the ethical challenges involved in AI being used to personalise content were recognised, most participants (n = 20) considered how this affected them personally in their daily practices. Yet, the scope was in many instances extended to the peer, national, or global level, and the thinking also reflected the perspectives of social rules and reasoned principles. For instance, the ways in which AI tools might be used to track and surveil people were considered problematic by some (n = 9), both at a personal and a global level, with various temporal impacts. Some participants (n = 6) were also mindful of how certain AI uses might accentuate digital and socio-economic divides, perceiving this as a national and global challenge rather than a personal one. Although most participants (n = 13) were aware of the potentially problematic uses and negative impacts of AI, the short-term benefits appeared to outweigh the potential challenges. Many participants (n = 13) pointed out possible problems, but seemed somewhat unconcerned about the risks for themselves, even though some of the mentioned issues, such as addiction and socio-economic inequalities, might affect them personally in the long term. These different ways of weighing AI benefits and harms connect with the ethical frameworks underlying the students’ ethical thinking, which included consequentialism, hedonism, and deontology (see Tables 4 and 5).

Table 4. Selection of case 1 ethical challenges with AI discussed by teenagers.

Table 5. Ethical dimensions identified in the selection of case 1 ethical challenges with AI discussed by teenagers.

When generating business ideas using AI for good, the teenagers adopted a broad scope, focusing on global challenges such as soil contamination due to non-biodegradable debris, the need for transport solutions using clean energy, and health hazards due to sedentary lifestyles, as shown in the selected cases (see Table 6). Despite the teens’ focus on global issues, they did not mention how these issues affected them on a personal level (see Table 7 for a summary of the ethical dimensions identified in the case 1 teenagers’ designs). In all the business plans created during the work practice week (n = 7), the participants justified their choices based on reasoned principles, and in some cases (n = 4) they searched for additional information to back their approach with empirical data from existing studies and reports. In relation to the ethical frameworks underlying the students’ business designs, consequence utilitarianism was present in all of them. However, not all the cases focused on positive consequences in the same way. For instance, in the Dancing Robot the participants argued their solution may bring people pleasure and happiness, while in other cases, such as the SuStainable Car, the participants highlighted the environmental benefits their solution may bring. In some cases (n = 4), the participants engaged in ethical reflection using more than one ethical framework. In a few cases (n = 3), the Shark-Bot Diver being one of them, participants took into consideration interests other than those of humans, including the interests of animals and natural ecosystems. Despite the business ideas being ambitious, the temporal dimension of all the designs (n = 7) was short-term, since the teenagers looked for short- to middle-term consequences. Finally, while the teens were introduced to various AI techniques throughout the week, in many cases (n = 5) they hardly elaborated on the AI techniques deployed in their designs. In this regard, it seems that when ideating their AI-based businesses, participants relied on their prior knowledge rather than applying the knowledge they had developed during the week.

Table 6. Selection of case 1 teenagers’ AI designs.

Table 7. Ethical dimensions identified in the selection of case 1 teenagers’ AI designs.

When examining the youths’ ethical reasoning in the interviews (a conversation-style format) and in the business design workshop, some differences become noticeable. For instance, when responding to the interview questions, the teenagers tended to adopt a broader scope, taking into consideration different levels ranging from personal to global. While in the design activities the teens focused on sustainability challenges they cared about at a personal level, in their presentations they tended to emphasise the global dimension, justifying the rationale of their innovations by appealing to reasoned principles. Also, although the youth participants reflected on technology consequences, the ethical frameworks underlying the teens’ inputs varied depending on the format. In the interviews, teenagers referred to technology consequences (in particular negative ones) and principles (especially when alluding to what they considered wrong practices), while in the design activity the teenagers highlighted their designs’ positive consequences and, in some cases, even called for an ethics of care approach. Irrespective of the format (interview or design workshop), teenagers showed more consideration for human interests, and only in a few cases did they take into consideration the interests of other beings. The temporal dimension considered by participants in their ethical reflections also varied between formats, with participants showing more concern for the long term in their interview inputs than in their business innovations. Finally, it is worth noting that although the teens participating in the work practice week had been instructed on various AI techniques, in the interviews and the design activities they largely held onto their prior knowledge instead of applying the new concepts they had learned during the week.

4.2. Case 2: critical AI literacy workshop with hands-on making

After engaging in design activities in a local Fab Lab, the youth participants reflected on their designs with regard to previously known issues with AI-based technologies impacting the lives of children, specifically algorithmic decision-making, as in the case of the UK school grades (Denes 2023), and the impact of social media on teens’ mental health. Participants focused their scope on the personal and peer level, considering human interests in the long term. They also built on their existing knowledge of social media and on what was discussed in the workshops (see Tables 8 and 9).

Table 8. Selection of case 2 ethical challenges with AI discussed by teenagers.

Table 9. Ethical dimensions identified in the selection of case 2 ethical challenges with AI discussed by teenagers.

With regard to algorithmic grading, the teenagers involved in the activities discussed ethical issues considering their impacts at a personal level and calling for the adoption of reasoned principles, such as fairness and accountability, to guide AI uses. For instance, groups mentioned that while AI-based decision-making might be based on data, such a process could feel invasive, especially when monitoring socio-emotional data (n = 6), and the decisions may not be as nuanced or fair as those of a human teacher (n = 8). Teenagers also highlighted the value of AI-powered systems at a personal level since, in their view, these technologies enable engaging and motivating personalised learning content. However, one group of participants (n = 6) also imagined future human-machine hybrid teachers who would know everything – like Google – including their students’ thoughts, highlighting potential privacy issues in an unbalanced knowledge relationship. With regard to the ethical frameworks, virtue ethics was often present in discussions on the strengths and weaknesses of human and AI-powered robot teachers, since the participants strived to differentiate between human and AI qualities, indicating how these might impact teaching-related tasks such as facilitating learning and grading. When discussing controversial issues like emotional manipulation through social media, participants struggled to take a stand, since they wanted to express themselves freely while recognising the need for codes of conduct regulating online behaviour.

While the groups were able to choose whether the focus of their design was a future teacher or a future friend, all groups decided to design a future teacher (see examples of the students’ designs in Tables 10 and 11), revealing how technology is viewed for learning in a classroom context. Here too, the scope of the designs focused on the teenagers’ personal and peer level, with one group (Jalmari) expanding the scope globally to include general expectations for future teachers to be knowledgeable. The groups’ designs considered the long-term interests of humans, with only one group (Robot 2.0) also considering the interests of AI-powered machines. All the designs incorporated machines of some kind, such as robots, cyborgs, and AI, and the students built on existing science-fiction imaginaries, such as the idea that robots cannot feel pain but can be self-aware.

Table 10. Selection of case 2 teenagers’ AI designs.

Table 11. Ethical dimensions identified in the selection of case 2 teenagers’ AI designs.

When presenting the designs, participants were able to combine various ethical frameworks (see Table 11). For instance, one of the groups (n = 6) highlighted the positive consequences of their AI-powered teachers, such as better supporting students and keeping them engaged (Henkka), adopting a hedonistic perspective to point to their AI designs’ potential for reducing the burden of learning. Virtue ethics was also noticeable in another group (n = 6), which elaborated on the qualities of a good AI-powered future teacher – one that can answer all questions (Robot 2.0). Although the teenagers’ designs explicitly referred to potential consequences stemming from technology use, they tended to assume that technology would solve all problems associated with teaching and learning, without considering how the technical solutions would integrate into social environments such as schools.

Teenagers’ ethical thinking during the different discussion- and design-based activities around AI showed some similarities and differences. In both activities, the positive potential of (future) technology, such as personalised and engaging learning content, was highlighted. When thinking of possible consequences, participants mentioned various threats to one’s autonomy, including recommendation systems pushing a consumerist agenda and future teachers predicting students’ behaviours based on the data collected. Throughout the sessions, all participants reflected on AI and human capabilities, as well as on social-emotional aspects, trying to establish the qualities that make a good teacher.

During the discussions, participants (n = 8) engaged in nuanced and complex ethical reflections on social media use and how to avoid negative unintended consequences, for instance how to guarantee self-expression in online environments while setting limits to prevent hateful and harassing content. However, in the designs, all participants assumed a techno-optimistic view that did not enable a deeper elaboration on the tensions and conflicts arising from the deployment of AI-powered teachers. This techno-optimism did not fade even though participants critically discussed technology’s social and ethical impacts.

4.3. Case 3: design activism workshop

At the beginning of the project, the teenagers (n = 38) reflected on their background and assumptions concerning digital technologies, including what they like and dislike about them, what positive and negative aspects they have, how they can be used for good and bad purposes, and how such technologies affect their lives. While the teenagers were invited to reflect on the ethical challenges of digital technologies in general, some of their interventions explicitly referred to AI (see Table 12).

Table 12. Selection of case 3 ethical challenges with AI discussed by teenagers.

With regard to technologies bringing entertainment or enjoyment, participants (n = 7) discussed ethical issues from the perspectives of egocentrism and reasoned principles (see Table 13). They acknowledged the entertainment value for themselves, but recognised that the services are designed to prolong user engagement. With regard to technologies making people’s lives easier on a more practical level, participants (n = 12) acknowledged technology’s potential to make life easier and more efficient, but noted that people should not become too dependent on technology in their daily lives. As part of technology’s positive impacts, teenagers (n = 4) referred to social media enabling self-expression, while acknowledging the design patterns that keep them posting, such as likes, story views and follower counts. In their reflections, they also noticed how continuous social comparison can create insecurities.

Table 13. Ethical dimensions identified in the selection of case 3 ethical challenges with AI discussed by teenagers.

The most visible ethical frameworks in the teens’ ethical reflections were those focusing on consequences, and positive consequences in particular (utilitarianism) (see Table 13). The youth highlighted the positive aspects of technologies incorporating AI elements, while recognising the possible negative consequences. The interests considered were human-centred. Teenagers discussed AI’s impact at a personal level, for instance in providing them enjoyment or entertainment, as well as at a global level, reflecting on potential drawbacks on a more general level. Concerning temporality, participants reflected on the immediate benefits, but recognised the potential for long-term problems. Finally, as the teenagers were not introduced to different AI techniques during the project, they based their reasoning on their existing knowledge, which drew from prior experiences and assumptions about AI.

When the students ideated and prototyped applications to combat bullying in schools, half of the produced designs (4 out of 8) included some level of AI to solve problematic situations or improve the status quo (see example designs in Table 14). In the design ideation activities, the students took a broad scope and aimed at making their apps available globally, since they understood bullying in school as a problem faced everywhere. In addition to helping those who are bullied, their designs also sought to help those watching from the sidelines and even the bullies, who might also need help. All groups (n = 4) justified their design choices related to AI components by calling on reasoned principles, and most (n = 3) on consequentialism (a utilitarian approach) as well. AI was considered useful as it can keep track of and evaluate previous situations better than humans, offering targeted advice and suggestions on how to proceed. The students acknowledged that AI could also provide companionship and emotional support when a human is not available. In relation to the ethical frameworks underlying the app designs, a concern for building relations based on care was visible in all the designs (n = 4). The apps focused on supporting human agency, both when seeking and when providing help, and AI was also viewed as encouraging children to seek help from adults or suggesting ways forward, for example when it comes to seeking human help. As the task was to design apps to combat bullying in schools, human interests were considered in all designs. Concerning temporality, all apps (n = 4) aimed at tackling the situation at hand, as well as at producing long-term changes, e.g. by helping students track their moods or by keeping track of and analysing the bullying situations at school. Finally, as the students were not introduced to different AI techniques during the project, all groups (n = 4) based their designs on their previous knowledge, experiences, and assumptions about AI (see Table 15).

Table 14. Selection of Case 3 teenagers’ ethical AI designs.

Table 15. Ethical dimensions identified in the selection of Case 3 teenagers’ AI designs.

When examining the youths’ ethical thinking in the surveys and in the design activities of Case 3, some differences become noticeable. Concerning the scope and perspective of their thinking, when the teenagers reflected on their own background and assumptions concerning digital technologies at the beginning, they often considered the possible negative sides of AI technologies from a global scope but addressed the potential benefits from a personal scope. While they generally weighed the pros and cons of technology from the perspective of reasoned principles, they considered the possible entertainment or educational value for themselves from an egocentric perspective. Perhaps partly due to the complexity and glocal scale of the bullying issues they aimed to tackle, in the design tasks they focused on finding solutions to the problem of bullying in a wider scope, justifying their design choices less through an egocentric perspective and more by appealing to reasoned principles. The ethical frameworks underlying the teenagers’ inputs varied. It is noteworthy that while in the reflection tasks the teenagers referred to both positive and negative technology consequences, in the design activities the negative consequences were not visible. However, perhaps partly because of the possible negative consequences the participants identified with AI technology early on in the reflection task, the groups kept a strong focus on human agency to trigger actions and provide help. In their designs, the teenagers focused on positive outcomes, such as those affected by bullying receiving peer support, adult support, or access to self-help resources, calling for an ethics of care approach. In both the reflective survey and the design tasks, the teenagers only showed consideration for humans, and showed concern for both the short term and the long term. Finally, as the teenagers were not introduced to AI techniques during the project, in both the reflection and design activities they relied largely on their prior knowledge of AI.

5. Discussion

The focus of the current study is on youths’ perspectives on AI and how these become visible through various activities and settings. Through our retrospective analysis of three cases involving 13- to 16-year-olds, we have been able to identify a variety of ways in which young people engaged in ethical reasoning through their encounters with AI. Next, we discuss the ethical challenges teenagers identified around AI and how they reflected upon them. We structure our discussion using Reiss’s (2010) dimensions of ethical thinking, drawing connections to existing literature. We also reflect on our adaptation of the framework, highlighting avenues for further development.

5.1. Ethical challenges around AI young people reflected upon

In the discussion-based activities of the three cases, participants reflected upon a variety of ethical challenges associated with AI. In all cases, AI uses fostering addiction, manipulation, and technological dependency, as well as the spread of misinformation and hate, were discussed. In some cases, participants also expressed concerns about continuous surveillance and privacy invasions (cases 1 and 2), the accentuation of divides in society (case 1), and dependency, social isolation, and toxic environments (case 3) enabled or accelerated by AI technologies.

In most of the discussions (cases 2 and 3), participants expressed their views on AI ethics making explicit references to everyday practices and experiences of AI. We consider this worth highlighting since the everyday dimension of AI is often overlooked (Elliot 2019), even though AI is becoming an everyday technology throughout the world (Daly 2022; Pink et al. 2022). As such, the findings emphasise the need to address everyday AI ethics with young people. Everyday ethics, referring to ethical aspects intermingled in our everyday lives (see Van Mechelen et al. 2020), is increasingly linked to technology. In this study, we discuss everyday AI ethics as concerning AI technologies, but as emerging and unfolding in everyday life, entailing young people’s reasoning and contemplation of ethical issues associated with AI, happening in mundane situations without necessarily being connected with ‘doing ethics’, i.e. the handling of ethically laden situations, or with formally ‘educating about ethics’. Such a focus on the everyday enables a better understanding of ethical issues from a socio-technical perspective (Sartori and Bocca 2023). It also allows people to see a space for agency in their daily lives, negotiating and contesting abuses and unethical deployments of technology (Daly 2022).

The challenges tackled by participants in the design activities revolved around issues close to their lives, such as climate change, the automation of teaching, and bullying. While design activities might allow for a better understanding of the inner workings of technology and help ground the ethics discussions (Ali et al. Citation2019; DiPaola, Payne, and Breazeal Citation2020; Williams et al. Citation2022), they very easily led participants into a techno-solutionist approach, as we could observe in the three cases presented in this study. Techno-solutionist approaches have been critiqued for assuming that technology is the answer to all problems (Postman Citation2011), oversimplifying issues (Morreale et al. Citation2020), rushing into answers without deeply questioning and understanding the problems (Dobbins Citation2011), and advocating solutions to address deficiencies that are not such (Morozov Citation2013). More recently, scholars have also criticised techno-solutions for following an extractivist logic (Pink Citation2022) and even challenging democratic values (Haven and Boyd Citation2020). While this does not mean that design activities lack potential to support ethical thinking, based on our findings we claim that technology design activities, especially those involving young people in learning environments, need to have a specific focus on ethics. This aligns with an increasing number of voices advocating for the adoption of an ethics-first approach (Kaspersen et al. Citation2022), embedded ethics (Grosz et al. Citation2019; Williams et al. Citation2022), and designing with ethics in mind (Ali et al. Citation2019) in AI design activities in learning contexts.

5.2. Ethical thinking dimensions identified in young people’s discussions and designs

In this study, we have built on Reiss (Citation2010) dimensions of ethical thinking to understand how the young people taking part in the case activities engaged in ethical reasoning. Next, we discuss Reiss (Citation2010) dimensions in relation to our findings and the existing literature.

When reflecting on the ethical challenges associated with AI, the teens participating in the cases considered various scopes ranging from the individual to the global (see Tables 16 and 17). Attention to the global scope of AI ethical challenges was present in all the cases. Depending on the case, participants also reflected on the impacts of AI technology use at the level of themselves and their peers. One example of this can be found in case 2, where teenagers discussed the negative consequences of being the target of hate speech fuelled by social media algorithms, while recognising the importance of guaranteeing freedom of speech. In all the cases, most of the ethical issues highlighted by participants related to social sustainability (apart from the case 1 designs, which also tackled environmental issues). Considering that sustainability issues have been described as wicked problems due to their complexity and interrelations at many levels (Lönngren and Van Poeck Citation2021; Pryshlakivsky and Searcy Citation2013), it would be valuable to encourage young people to look at the interrelations between the micro, meso and macro impacts of such challenges. Also, considering that in all the cases participants tackled complex problems relating to socio-technical systems, youths' ethical thinking might benefit from guidance to consider issues from a systemic perspective (Curwen et al. Citation2018; Hofman-Bergholm Citation2018; Uehara Citation2020) and thus develop a deeper and more fine-grained understanding of the scope of the ethical issues under discussion.

Table 16. Summary of ethical thinking dimensions evidenced in young people’s discussions.

Table 17. Summary of ethical thinking dimensions evidenced in young people’s designs.

The perspectives adopted by participants when thinking about AI challenges consisted of egocentrism and reasoned principles, the latter being predominant in cases 1 and 3. Based on the analysis of the teenagers' contributions, it seems that the centre of their moral universe revolves around their own worldviews and principles (see Tables 16 and 17). When participants reasoned their position on the basis of principles, they drew on the liberal tradition (see for instance the ethical challenges around freedom of speech in cases 2 and 3) and the defence of human rights, such as respect for personal privacy and integrity (see cases 1 and 2). As this likely reflects the framing of the activities (as will be discussed later in more detail), it would be worth encouraging young people to critically reflect on the situated nature of such principles and the importance of acknowledging diverse worldviews. In this regard, we advocate extending Reiss (Citation2010) dimensions of ethical thinking beyond reasoned principles to include a further indicator of progression referring to young people's ability to embrace a pluralistic perspective in their ethical thinking. For this, we believe it is necessary to question eurocentrism, broadening perceptions to build a world in which many worlds would coexist (De la Cadena and Blaser Citation2018; Mignolo Citation2018).

When reflecting on the ethical challenges, participants tended to focus on the consequences. Irrespective of the activity type, consequentialism was the ethical framework most evidenced in participants' discussions and designs. It is worth noting that participants were able to elaborate their thoughts using one or two ethical frameworks in most of the cases (see Tables 16 and 17). Only in one case (C3.4) could we observe evidence of three frameworks in participants' reflections. In our view, the participants' ability to use various ethical frameworks was influenced less by the activity format than by the design brief and the complexity of the issue at hand (for instance, in case 3, where activities focused on bullying, the participants used more frameworks than in the other cases). Since our study did not use a systematic comparative approach, we need to be cautious when comparing discussion-based versus design-based activities. This being said, we did notice some differences in the type of frameworks deployed by the students depending on the type of activity. In the discussion-based activities (cases 1, 2 and 3), the teenagers engaged in ethical thinking using a consequentialist approach, but also from deontological and virtue ethics perspectives (see Table 16). In this sense, we consider that the teenagers' views resonate with current general media discourses emphasising AI consequences, including unintended consequences (see Ouchchy, Coin, and Dubljević Citation2020). In the design activities, consequentialism was also present, but in this case participants tended to focus on positive consequences, which is characteristic of utilitarianism. Other ethical frameworks evidenced in participants' ethical thinking through design were hedonism and virtue ethics (see Table 17).

The topic under discussion, its framing, and the type of activity the participants engaged in all appeared to have an impact on the ethical frameworks alluded to by participants. The discussion activities appeared to invite consideration of the negative and longer-term consequences of AI, while the design activities led to shorter-term considerations and an emphasis on the positive consequences of the designs (see Tables 16 and 17). For instance, in case 2, where the emphasis was on envisioning a future teacher or friend, virtue ethics became explicit, whereas in case 3, focused on bullying, the participants' thinking revolved around the notion of care. Although recent work in human-computer interaction has called for further integration of existing frameworks of design ethics with virtue ethics (Farina et al. Citation2022; Gorichanaz Citation2022), to the best of our knowledge there is no extensive work on virtue ethics in AI education activities involving children and young people. We believe this is an interesting avenue to explore in future research. Recent studies have explored how to support an ethic of care among children and young people from various perspectives, such as Silvis et al.'s (Citation2022) call for fostering an ethic of care towards technological things as a way to support sustainable socio-ecological relations, and O'Reilly et al.'s (Citation2021) exploration of a 'digital ethics of care' among young people in social media. Ethics of care approaches have also been elaborated in the literature on AI technologies as a way to address the complexity of socio-technical relations (see for instance Lagerkvist et al. Citation2022 and Cohn Citation2020). Based on our analysis of the cases reported in this study, we advocate for further work to cultivate ethics of care among young people in relation to everyday AI ethics.

According to Reiss (Citation2010), the interests considered in ethical thinking might range from those of humans only, to those of all sentient animals, and to those of whole ecosystems. In this study, irrespective of the activity, in all the cases the predominant interests considered by the participants were those of humans (see Tables 16 and 17). Over the last decades, anthropocentric worldviews have been strongly critiqued since they ignore needs other than those of humans and are at the root of the ecological crisis that current societies are facing (Parikka Citation2014; Zalasiewicz et al. Citation2008). In response to this crisis, some voices advocate embracing non-anthropocentric ethics that consider values other than those of humans as a way to cultivate more-than-human relations (Tironi et al. Citation2023). Non-anthropocentric ethics are strongly related to sustainability discourses. Given the increasing emphasis in research and education activities on addressing sustainability challenges (see for instance the UNESCO SDGs), we consider it crucial to support young people in developing non-anthropocentric ethics when engaging in discussion and design activities focused on AI technologies. Some examples of recent research exploring non-anthropocentric worldviews with children and young people have used strategies such as interspecies communication (French et al. Citation2021), augmented storying (Kumpulainen et al. Citation2022), human de-centering (Quay Citation2021), creating contact zones for more-than-human design collaborations (Prost et al. Citation2021), and arts-based methods for designing more-than-human cities (Wolff et al. Citation2021).

The temporal horizon considered by participants when reflecting on AI was quite varied. For instance, in case 1 short-term impacts were slightly more present, while case 2 was explicitly focused on futures. In the conversation-based activities, participants made explicit references to long-term impacts (see Table 16). In the design activities, participants aimed to change the current state of things through their designs, although they did not specify a temporal horizon explicitly. Design activities have been associated with futures-making, since design solutions imply a particular worldview, fostering certain behaviours and values (Ehn, Nilsson, and Topgaard Citation2014; Yelavich and Adams Citation2014). Thus, although design activities are future-oriented, we need to be cautious about the temporal horizon participants considered. We call for further research to understand how young people perceive the potential of design for generating future(s). An explicit focus on futures, through activities fostering futures thinking, might contribute to broadening the temporal horizon of young people's reflections, since thinking about the future involves appreciating the past(s), understanding the present(s) and engaging in forecasting of potential futures, envisioning and experiencing alternative ones, as well as creating them (Dator Citation2019). From a design perspective, we want to point out the growing research corpus using futures approaches with youth, such as design futuring (Iivari et al. Citation2021; Sharma et al. Citation2022a; Tamashiro et al. Citation2021; Ventä-Olkkonen et al. Citation2021) and speculative approaches (Khan et al. Citation2021; Rousell, Cutter-Mackenzie, and Foster Citation2017; Wargo and Alvarado Citation2020).

Reiss (Citation2010) framework also includes a dimension on 'knowledge', assessing the type of knowledge underlying young people's ethical reflections. As indicators of progression, the scale proposed by Reiss ranges from using existing knowledge only, at the lower end, to using taught knowledge to research new knowledge at the more advanced levels. As mentioned earlier, in the cases included in this study the participants did not receive any explicit instruction on ethical thinking in relation to technology. Thus, we are not surprised that in most of the cases the participants relied on prior knowledge when thinking about AI ethics (see Tables 16 and 17). This can be especially observed in the discussion-based activities (see Table 16). In the design activities (cases 1 and 2), the participants drew on taught and new knowledge about societal challenges, sustainability, and technology to develop and justify their designs. Considering our research design, we cannot draw strong conclusions on this dimension. However, we want to draw attention to recent studies showing that embedding ethics in technology courses fosters students' holistic thinking on the societal implications of technology (Skirpan et al. Citation2018; Zhang et al. Citation2022).

As noted, we identified some differences between the cases, as well as between the activities, in terms of the ethical reasoning they encouraged the participants to engage in. Based on our observations, some pedagogical implications can be identified. First, the combination of design and discussion approaches may be valuable, as the design activities appeared to lead to short-term considerations and an emphasis on positive consequences, while the discussion activities invited consideration of the negative and long-term consequences of AI. Second, the assignment and topic of the design task need to be carefully considered, as they shape the ethical considerations and their scope. For instance, in some of our cases the discussion task was framed to address personal experiences, which naturally led to an egocentric focus. The design tasks, nevertheless, seemed to invite children to also consider the perspectives of others (Bosch, Härkki, and Seitamaa-Hakkarainen Citation2022; Dawbin et al. Citation2021). We are somewhat disappointed to see that the children focused heavily on humans, with only one example considering animals and ecosystems, even though one of our cases had sustainability as its focus. The anti-bullying design topic invited ethics of care considerations among the children, and the future teacher case invited virtue ethics ones. Hence, with their topic and activity selections, educators can have an impact on the kind of ethics considerations that emerge. One noteworthy observation is that our cases did not include explicit design ethics education with children, so it is not surprising that we could not identify considerable changes in their discussion and design task considerations, or in the knowledge the participants drew from.

5.3. Limitations and further research

This study adopted a retrospective approach, which comes with limitations. Importantly, when designing the cases, we did not plan them to focus specifically on the ethical reflections of young people. Instead, the analysis of the cases highlighted these reflections as they emerged as part of the different activities. While this may be considered a limitation, it also allowed us to examine ethical reflections in a more naturalistic setting, as part of other discussions and hands-on work. To inform future pedagogical designs, further work is needed to examine the insights of this study concerning how the various contextual elements of a setting, including the activity type, the topic and its framing, the physical space, and the participants, together shape participants' ethical reasoning.

All cases were conducted in the Global North, specifically in one country (Finland) and one city, and at the intersection of formal education and design/making education. A Finnish classroom is a unique environment, and learning environments can vary greatly between countries. Given the nature of this qualitative study, the number of participants is limited. Thus, to strengthen our claims, we consider it necessary to conduct further research in other cultural contexts with higher numbers of participants.

In this study, we selected an existing ethical framework (Reiss Citation2010) and used it as the foundation for data analysis. While we consider Reiss (Citation2010) framework to be the most suitable for our study, we also noticed some limitations. The main shortcoming we noticed concerns assessing students' level of ethical thinking based on the number of ethical frameworks evidenced. While we do not disagree with the author, we consider that certain ethical frameworks open up more complex thinking than others. Also, when analysing the temporal dimension, Reiss (Citation2010) framework seems to implicitly adopt a consequentialist approach, since the focus is on consequences. Thus, if participants follow other ethical frameworks, such as virtue ethics or ethics of care, assessing the temporal scope becomes challenging.

Finally, in our study the participants were primarily positioned as research subjects rather than co-researchers. Future investigations could leverage our findings to delve deeper into young people's ethical thinking concerning digital technology, for instance by empowering young people and positioning them as co-researchers.

6. Conclusion

With this study, we contribute to technology ethics education and CCI research by exploring young people's reflections on ethical issues around AI. We propose an adaptation of Reiss (Citation2010) dimensions of ethical thinking, which we deploy to analyse three cases in which young people engaged in discussion and design activities involving AI. Our cases show how teenagers reflect on AI ethics, even without dedicated teaching on it, and are able to consider various dimensions and make connections to everyday uses of AI that raise ethical issues. In our analysis, we elaborate on how different types of framings and activities shape various dimensions of young people's ethical thinking. Based on our findings, we align with recent studies advocating for an integrated approach of AI and ethics in learning activities about technology. We consider our adaptation of Reiss (Citation2010) dimensions of ethical thinking a valuable contribution that helps advance research and practice in the field of CCI and technology ethics education.

Acknowledgements

We thank the participants and educators involved in the cases analysed in this study for their valuable contributions.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Correction Statement

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

Additional information

Funding

The study has been supported by the GenZ project, a strategic profiling project in human sciences at the University of Oulu, funded by the Academy of Finland (Grant #318930) and the University of Oulu, Finland. This research is also connected to the Academy of Finland funded projects Participatory AI with Schoolchildren (Grant #340603), Make-a-Difference (Grant #324685), Critical DataLit (Grant #354445) and FutuProta - Making Sense of ‘Technology Protagonism’ (Grant #351584).

References

  • Ali, S., D. DiPaola, I. Lee, V. Sindato, G. Kim, R. Blumofe, and C. Breazeal. 2021. “Children as Creators, Thinkers and Citizens in an AI-Driven Future.” Computers and Education: Artificial Intelligence 2: 100040. https://doi.org/10.1016/j.caeai.2021.100040.
  • Ali, S., B. H. Payne, R. Williams, H. W. Park, and C. Breazeal. 2019, June. “Constructionism, Ethics, and Creativity: Developing Primary and Middle School Artificial Intelligence Education.” In International Workshop on Education in Artificial Intelligence K-12 (EDUAI’19), Vol. 2, 1–4.
  • Antle, A. N., C. Frauenberger, M. Landoni, J. A. Fails, M. Jirotka, H. Webb, and N. Tutiyaphuengprasert. 2020, June. “Emergent, Situated and Prospective Ethics for Child-Computer Interaction Research.” In Proceedings of the 2020 ACM Interaction Design and Children Conference: Extended Abstracts, 54–61. https://doi.org/10.1145/3397617.3398058.
  • Antle, A. N., and A. Kitson. 2021. “1, 2, 3, 4 Tell Me How to Grow More: A Position Paper on Children, Design Ethics and Biowearables.” International Journal of Child-Computer Interaction 30: 100328. https://doi.org/10.1016/j.ijcci.2021.100328.
  • Antle, A. N., Y. Murai, A. Kitson, Y. Candau, Z. M. T. Dao-Kroeker, and A. Adibi. 2022, June. “There are a LOT of Moral Issues With Biowearables … Teaching Design Ethics Through a Critical Making Biowearable Workshop.” Interaction Design and Children, 327–340. https://doi.org/10.1145/3501712.3529717.
  • Arista, N., S. Costanza-Chock, V. Ghazavi, and S. Kite. 2021. Against Reduction: Designing a Human Future with Machines. Cambridge, MA: MIT Press.
  • Ashok, M., R. Madan, A. Joha, and U. Sivarajah. 2022. “Ethical Framework for Artificial Intelligence and Digital Technologies.” International Journal of Information Management 62: 102433.
  • Association for Learning Technology. 2021. Framework for Ethical Learning Technology (FELT). Accessed 21 March 2024. https://www.alt.ac.uk/about-alt/what-we-do/alts-ethical-framework-learning-technology.
  • Beauchamp, T., and J. Childress. 1983. Principles of Biomedical Ethics. New York: Oxford University Press.
  • Beauchamp, T., and J. Childress. 2008. Principles of Biomedical Ethics. New York: Oxford University Press.
  • Bilstrup, K. E. K., M. H. Kaspersen, and M. G. Petersen. 2020, July. “Staging Reflections on Ethical Dilemmas in Machine Learning: A Card-Based Design Workshop for High School Students.” In Proceedings of the 2020 ACM Designing Interactive Systems Conference, 1211–1222.
  • Bosch, N., T. Härkki, and P. Seitamaa-Hakkarainen. 2022. “Design Empathy in Students’ Participatory Design Processes.” Design and Technology Education: An International Journal 27 (1): 29–48.
  • Boydston, J. A., A. Sharpe, H. Furst Simon, and B. Levine. eds. 2008. John Dewey: The Collected Works of John Dewey, 1882-1953 (Vols. 1–37). Carbondale, IL: Southern Illinois University Press.
  • Briggle, A., J. B. Holbrook, J. Oppong, J. Hoffmann, E. K. Larsen, and P. Pluscht. 2016. “Research Ethics Education in the STEM Disciplines: The Promises and Challenges of a Gaming Approach.” Science and Engineering Ethics 22: 237–250. https://doi.org/10.1007/s11948-015-9624-6.
  • Brink, D. O. 2007. “Some Forms and Limits of Consequentialism.” In The Oxford Handbook of Ethical Theory, edited by D. Copp. Oxford: Clarendon Press. https://doi.org/10.1093/oxfordhb/9780195325911.003.0015
  • Buntting, C., and B. Ryan. 2010. “In the Classroom: Exploring Ethical Issues With Young Pupils.” In Ethics in the Science and Technology Classroom: A New Approach to Teaching and Learning, edited by A. Jones, A. McKim, and M. Reiss, 37–54. Rotterdam: Sense Publishers. https://doi.org/10.1163/9789460910715_005
  • Cagiltay, B., H. R. Ho, J. E. Michaelis, and B. Mutlu. 2020, June. “Investigating Family Perceptions and Design Preferences For an In-Home Robot.” In Proceedings of the Interaction Design and Children Conference, 229–242. https://doi.org/10.1145/3392063.3394411.
  • Chan, J. K. 2018. “Design Ethics: Reflecting on the Ethical Dimensions of Technology, Sustainability, and Responsibility in the Anthropocene.” Design Studies 54: 184–200. https://doi.org/10.1016/j.destud.2017.09.005.
  • Cohn, J. 2020. “In a Different Code: Artificial Intelligence and the Ethics of Care.” The International Review of Information Ethics 28. https://doi.org/10.29173/irie383.
  • Constantin, A., V. Andries, J. Korte, C. A. Alexandru, J. Good, G. Sim, and E. Eriksson. 2022, June. “Ethical Considerations of Distributed Participatory Design with Children.” Interaction Design and Children, 700–702. https://doi.org/10.1145/3501712.3536386.
  • Curwen, M. S., A. Ardell, L. MacGillivray, and R. Lambert. 2018, November. “Systems Thinking in a Second Grade Curriculum: Students Engaged to Address a Statewide Drought.” Frontiers in Education 3. https://doi.org/10.3389/feduc.2018.00090.
  • Daly, A. 2022. “Everyday AI Ethics: From the Global to Local Through Facial Recognition.” In Economies of Virtue: The Circulation of ‘Ethics’ in AI, edited by T. Phan, J. Goldenfein, D. Kuch, and M. Mann, 83–103. Amsterdam: Institute of Network Cultures.
  • Dator, J. 2019. “Alternative Futures at the Manoa School.” Journal of Futures Studies 14 (2): 1–18. https://doi.org/10.1007/978-3-030-17387-6_5.
  • Dawbin, B., M. Sherwen, S. Dean, S. Donnelly, and R. Cant. 2021. “Building Empathy Through a Design Thinking Project: A Case Study with Middle Secondary Schoolboys.” Issues in Educational Research 31 (2): 440–457.
  • De la Cadena, M., and M. Blaser, eds. 2018. A World of Many Worlds. Durham: Duke University Press.
  • DeLuca, R. 2010. “Using Narrative for Ethical Thinking.” In Ethics in the Science and Technology Classroom: A New Approach to Teaching and Learning, edited by A. Jones, A. McKim, and M. Reiss, 87–102. Rotterdam: Sense Publishers.
  • Denes, G. 2023. “A Case Study of Using AI for General Certificate of Secondary Education (GCSE) Grade Prediction in a Selective Independent School in England.” Computers and Education: Artificial Intelligence 4: 100129. https://doi.org/10.1016/j.caeai.2023.100129.
  • DiPaola, D., B. H. Payne, and C. Breazeal. 2020, June. “Decoding Design Agendas: An Ethical Design Activity for Middle School Students.” In Proceedings of the Interaction Design and Children Conference, 1–10.
  • Dobbins, M. 2011. Urban Design and People. Hoboken, NJ: John Wiley & Sons.
  • Eglash, R., M. Lachney, W. Babbitt, A. Bennett, M. Reinhardt, and J. Davis. 2020. “Decolonizing Education with Anishinaabe Arcs: Generative STEM as a Path to Indigenous Futurity.” Educational Technology Research and Development 68 (3): 1569–1593. https://doi.org/10.1007/s11423-019-09728-6.
  • Ehn, P., E. M. Nilsson, and R. Topgaard. 2014. Making Futures: Marginal Notes on Innovation, Design, and Democracy. Cambridge, MA: The MIT Press. https://doi.org/10.7551/mitpress/9874.001.0001.
  • Elliot, A. 2019. The Culture of AI: Everyday Life and the Digital Revolution. London, UK: Routledge.
  • European Commission. 2019. Ethics Guidelines for Trustworthy AI. Accessed 21 March 2024. https://ec.europa.eu/digital-single-market/en/news/ethicsguidelines-trustworthy-ai.
  • Farina, M., P. Zhdanov, A. Karimov, and A. Lavazza. 2022. “AI and Society: A Virtue Ethics Approach.” AI & Society, 1–14. https://doi.org/10.1007/s00146-022-01545-5.
  • Flinders, D. J. 1992. “In Search of Ethical Guidance: Constructing a Basis for Dialogue.” Qualitative Studies in Education 5: 101–115. https://doi.org/10.1080/0951839920050202.
  • Floridi, L. 2018. “Soft Ethics, the Governance of the Digital and the General Data Protection Regulation.” Philosophical Transactions Series A Mathematical, Physical, and Engineering Sciences 376 (2133): 20180081. https://doi.org/10.1098/rsta.2018.0081.
  • Franzke, A. S., A. Bechmann, M. Zimmer, and C. Ess. 2020. Internet Research: Ethical Guidelines 3.0. Accessed 21 March 2024. https://aoir.org/reports/ethics3.pdf.
  • Frauenberger, C., A. N. Antle, M. Landoni, J. C. Read, and J. A. Fails. 2018, June. “Ethics in Interaction Design and Children: A Panel and Community Dialogue.” In Proceedings of the 17th ACM Conference on Interaction Design and Children, 748–752.
  • French, F., I. Hirskyj-Douglas, H. Väätäjä, P. Pons, S. Karl, Y. Chisik, and D. Vilker. 2021, June. “Ethics and Power Dynamics in Playful Technology for Animals: Using Speculative Design to Provoke Reflection.” In Proceedings of the 24th International Academic Mindtrek Conference, 91–101. https://doi.org/10.1145/3464327.3464366.
  • Gisewhite, R. A. 2023. “A Whale of a Time: Engaging in a War of Values for Youth Activism in Science Education.” Cultural Studies of Science Education, 1–25. https://doi.org/10.1007/s11422-022-10140-5.
  • Gorichanaz, T. 2022. Designing a Future Worth Wanting: Applying Virtue Ethics to HCI. arXiv preprint arXiv:2204.02237.
  • Gorodnichenko, Y., T. Pham, and O. Talavera. 2021. “Social Media, Sentiment and Public Opinions: Evidence from# Brexit and# USElection.” European Economic Review 136: 103772. https://doi.org/10.1016/j.euroecorev.2021.103772.
  • Grammenos, D. 2016, June. “roboTwin: A Modular & Evolvable Robotic Companion for Children.” In Proceedings of the 15th International Conference on Interaction Design and Children, 742–744.
  • Grosz, B. J., D. G. Grant, K. Vredenburgh, J. Behrends, L. Hu, A. Simmons, and J. Waldo. 2019. “Embedded EthiCS: Integrating Ethics Across CS Education.” Communications of the ACM 62 (8): 54–61. https://doi.org/10.1145/3330794.
  • Haven, J., and D. Boyd. 2020. Philanthropy’s Techno-Solutionism Problem. New York: Data & Society Research Institute.
  • Hofman-Bergholm, M. 2018. “Could Education for Sustainable Development Benefit from a Systems Thinking Approach?” Systems 6 (4): 43. https://doi.org/10.3390/systems6040043.
  • Holmes, W., K. Porayska-Pomsta, K. Holstein, E. Sutherland, T. Baker, S. B. Shum, … and K. R. Koedinger. 2021. “Ethics of AI in Education: Towards a Community-Wide Framework.” International Journal of Artificial Intelligence in Education 32: 542–570. https://doi.org/10.1007/s40593-021-00239-1.
  • Holmes, W., and I. Tuomi. 2022. “State of the Art and Practice in AI in Education.” European Journal of Education 57 (4): 542–570. https://doi.org/10.1111/ejed.12533.
  • Hourcade, J. P., M. Alper, A. N. Antle, G. E. Baykal, E. Bonsignore, T. Clegg, and J. Yip. 2023, April. “Developing Participatory Methods to Consider the Ethics of Emerging Technologies for Children.” In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, 1–3. https://doi.org/10.1145/3544549.3583172.
  • The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. 2017. Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems, Version 2. IEEE. https://doi.org/10.1007/978-3-030-12524-0_2.
  • Iivari, N., L. Ventä-Olkkonen, S. Sharma, T. Molin-Juustila, and E. Kinnunen. 2021, May. “CHI Against Bullying: Taking Stock of the Past and Envisioning the Future.” In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–17. https://doi.org/10.1145/3411764.3445282.
  • Kafai, Y., G. Jayathirtha, M. Shaw, and L. Morales-Navarro. 2021, June. “Codequilt: Designing an Hour of Code Activity for Creative and Critical Engagement with Computing.” Interaction Design and Children, 573–576. https://doi.org/10.1145/3459990.3465187.
  • Kafai, Y. B., and K. A. Peppler. 2011. “Youth, Technology, and DIY: Developing Participatory Competencies in Creative Media Production.” Review of Research in Education 35 (1): 89–119. https://doi.org/10.3102/0091732X10383211.
  • Kaspersen, M. H., K. E. K. Bilstrup, M. Van Mechelen, A. Hjort, N. O. Bouvin, and M. G. Petersen. 2022. “High School Students Exploring Machine Learning and its Societal Implications: Opportunities and Challenges.” International Journal of Child-Computer Interaction, 100539. https://doi.org/10.1016/j.ijcci.2022.100539.
  • Khalil, A., S. G. Ahmed, A. M. Khattak, and N. Al-Qirim. 2020. “Investigating Bias in Facial Analysis Systems: A Systematic Review.” IEEE Access 8: 130751–130761. https://doi.org/10.1109/Access.2020.3006051.
  • Khan, A. H., N. Ejaz, S. Matthews, S. Snow, and B. Matthews. 2021, June. “Speculative Design for Education: Using Participatory Methods to Map Design Challenges and Opportunities in Pakistan.” In Proceedings of the 2021 ACM Designing Interactive Systems Conference, 1748–1764. https://doi.org/10.1145/3461778.3462117.
  • Kibert, C. J., M. C. Monroe, A. L. Peterson, R. R. Plate, and L. P. Thiele. 2012. Working Toward Sustainability: Ethical Decision Making in a Technological World. Hoboken, NJ: Wiley.
  • Ko, A. J., A. Oleson, N. Ryan, Y. Register, B. Xie, M. Tari, M. Davidson, S. Druga, and D. Loksa. 2020. “It is Time for More Critical CS Education.” Communications of the ACM 63 (11): 31–33. https://doi.org/10.1145/3424000.
  • Kohlberg, L. 1975. “The Cognitive-Developmental Approach to Moral Education.” Phi Delta Kappan 56: 670–677.
  • Kumpulainen, K., J. Renlund, J. Byman, and C. C. Wong. 2022. “Empathetic Encounters of Children’s Augmented Storying Across the Human and More-Than-Human Worlds.” International Studies in Sociology of Education 31 (1-2): 208–230. https://doi.org/10.1080/09620214.2021.1916400.
  • Lagerkvist, A., M. Tudor, J. Smolicki, C. M. Ess, J. Eriksson Lundström, and M. Rogg. 2022. “Body Stakes: An Existential Ethics of Care in Living With Biometrics and AI.” AI & Society, 1–13. https://doi.org/10.1007/s00146-022-01550-8.
  • Lambrecht, A., and C. Tucker. 2019. “Algorithmic Bias? An Empirical Study of Apparent Gender-Based Discrimination in the Display of STEM Career Ads.” Management Science 65 (7): 2966–2981. https://doi.org/10.2139/ssrn.2852260.
  • Larrabee, M. J. 2016. An Ethic of Care: Feminist and Interdisciplinary Perspectives. Routledge. https://doi.org/10.2307/3341989
  • Lee, C. H., N. Gobir, A. Gurn, and E. Soep. 2022. “In the Black Mirror: Youth Investigations into Artificial Intelligence.” ACM Transactions on Computing Education 22 (3): 1–25. https://doi.org/10.1145/3484495.
  • Levick-Parkin, M., E. Stirling, M. Hanson, and R. Bateman. 2021. “Beyond Speculation: Using Speculative Methods to Surface Ethics and Positionality in Design Practice and Pedagogy.” Global Discourse 11 (1-2): 193–214. https://doi.org/10.1332/204378920X16055409420649.
  • Lincoln, Y. S., and E. G. Guba. 1985. Naturalistic Inquiry. Newbury Park, CA: Sage Publications. https://doi.org/10.1016/0147-1767(85)90062-8
  • Lönngren, J., and K. Van Poeck. 2021. “Wicked Problems: A Mapping Review of the Literature.” International Journal of Sustainable Development & World Ecology 28 (6): 481–502. https://doi.org/10.1080/13504509.2020.1859415.
  • Malterud, K. 2001. “Qualitative Research: Standards, Challenges, and Guidelines.” The Lancet 358 (9280): 483–488. https://doi.org/10.1016/S0140-6736(01)05627-6.
  • McGee, E. O., and L. Bentley. 2017. “The Troubled Success of Black Women in STEM.” Cognition and Instruction 35 (4): 265–289. https://doi.org/10.1080/07370008.2017.1355211.
  • McKim, A. 2010. “Bioethics Education.” In Ethics in the Science and Technology Classroom: A New Approach to Teaching and Learning, edited by A. Jones, A. McKim, and M. Reiss, 19–36. Rotterdam: Sense Publishers.
  • McNally, B., M. L. Guha, M. L. Mauriello, and A. Druin. 2016, May. “Children’s Perspectives on Ethical Issues Surrounding Their Past Involvement on a Participatory Design Team.” In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 3595–3606. https://doi.org/10.1145/2858036.2858338.
  • Mignolo, W. 2018. “Forward: On Pluriversality and Multipolarity.” In Constructing the Pluriverse: The Geopolitics of Knowledge, edited by B. Reiter, ix–xvi. Durham: Duke University Press.
  • Morozov, E. 2013. To Save Everything, Click Here: The Folly of Technological Solutionism. New York: Public Affairs.
  • Morreale, F., S. M. A. Bin, A. McPherson, P. Stapleton, and M. Wanderley. 2020. “A NIME of the Times: Developing an Outward-Looking Political Agenda For This Community.” Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, 160–165. https://doi.org/10.5281/zenodo.4813294.
  • Mortari, L., and M. Ubbiali. 2017. “The “MelArete” Project: Educating Children to the Ethics of Virtue and of Care.” European Journal of Educational Research 6 (3): 269–278. https://doi.org/10.12973/eu-jer.6.3.269.
  • O’Reilly, M., D. Levine, and E. Law. 2021. “Applying a ‘Digital Ethics of Care’ Philosophy to Understand Adolescents’ Sense of Responsibility on Social Media.” Pastoral Care in Education 39 (2): 91–107. https://doi.org/10.1080/02643944.2020.1774635.
  • Ouchchy, L., A. Coin, and V. Dubljević. 2020. “AI in the Headlines: The Portrayal of the Ethical Issues of Artificial Intelligence in the Media.” AI & Society 35: 927–936. https://doi.org/10.1007/s00146-020-00965-5.
  • Parikka, J. 2014. The Anthrobscene. Minneapolis: University of Minnesota Press. https://doi.org/10.5749/9781452958521
  • Pavarini, G., S. Alí, L. George, L. Kariyawasam, J. Lorimer, N. Tomat, and I. Singh. 2020, June. “Gamifying Bioethics: A Case Study of Co-Designing Empirical Tools with Adolescents.” In Proceedings of the 2020 ACM Interaction Design and Children Conference: Extended Abstracts, 320–325.
  • Pink, S. 2022. “Trust, Ethics and Automation: Anticipatory Imaginaries in Everyday Life.” In Everyday Automation, 44–58. New York: Routledge.
  • Pink, S., M. Ruckenstein, M. Berg, and D. Lupton. 2022. “Everyday Automation: Setting a Research Agenda.” In Everyday Automation: Experiencing and Anticipating Emerging Technologies, edited by S. Pink, M. Berg, D. Lupton, and M. Ruckenstein, 1–19. New York: Taylor & Francis.
  • Pinkard, N., S. Erete, C. K. Martin, and M. McKinney de Royston. 2017. “Digital Youth Divas: Exploring Narrative-Driven Curriculum to Spark Middle School Girls’ Interest in Computational Activities.” Journal of the Learning Sciences 26 (3): 477–516. https://doi.org/10.1080/10508406.2017.1307199.
  • Postman, N. 2011. Technopoly: The Surrender of Culture to Technology. New York: Vintage.
  • Prost, S., I. Pavlovskaya, K. Meziant, V. Vlachokyriakos, and C. Crivellaro. 2021. “Contact Zones: Designing for More-than-Human Food Relations.” Proceedings of the ACM on Human-Computer Interaction 5 (CSCW1): 1–24. https://doi.org/10.1145/3449121.
  • Pryshlakivsky, J., and C. Searcy. 2013. “Sustainable Development as a Wicked Problem.” In Managing and Engineering in Complex Situations, edited by S. F. Kovacic and A. Sousa-Poza, Vol. 21, 109–128. New York: Springer Science & Business Media. https://doi.org/10.1007/978-94-007-5515-4_6.
  • Quay, J. 2021. “Wild and Willful Pedagogies: Education Policy and Practice to Embrace the Spirits of a More-Than-Human World.” Policy Futures in Education 19 (3): 291–306. https://doi.org/10.1177/1478210320956875.
  • Raji, I. D., M. K. Scheuerman, and R. Amironesei. 2021, March. “You Can’t Sit With Us: Exclusionary Pedagogy in AI Ethics Education.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 515–525.
  • Ratto, M. 2011. “Critical Making: Conceptual and Material Studies in Technology and Social Life.” The Information Society 27 (4): 252–260. https://doi.org/10.1080/01972243.2011.583819.
  • Read, J. C., M. Horton, G. Sim, P. Gregory, D. Fitton, and B. Cassidy. 2013. “CHECk: A Tool to Inform and Encourage Ethical Practice in Participatory Design with Children.” In CHI’13 Extended Abstracts on Human Factors in Computing Systems, 187–192.
  • Reiss, M. 2003. “How We Reach Ethical Conclusions.” In Key Issues in Bioethics, edited by R. Levinson, and M. Reiss, 14–23. London: Routledge. https://doi.org/10.4324/9780203464533-2
  • Reiss, M. 2006. “Teacher Education and the New Biology.” Teaching Education 17: 121–131. https://doi.org/10.1080/10476210600680325.
  • Reiss, M. 2010. “Ethical Thinking.” In Ethics in the Science and Technology Classroom: A New Approach to Teaching and Learning, edited by A. Jones, A. McKim, and M. Reiss, 7–18. Rotterdam: Sense Publishers.
  • Rest, J. R. 1982. A Psychologist Looks at the Teaching of Ethics. Hastings Center Report, 29–36. https://doi.org/10.2307/3560621.
  • Rolston III, H. 2017. “Is There an Ecological Ethic?” In Ethics in Planning, edited by M. Wachs, 299–317. New York: Routledge. https://doi.org/10.4324/9781315239897-2.
  • Rousell, D., A. Cutter-Mackenzie, and J. Foster. 2017. “Children of an Earth to Come: Speculative Fiction, Geophilosophy and Climate Change Education Research.” Educational Studies 53 (6): 654–669. https://doi.org/10.1080/00131946.2017.1369086.
  • Rubegni, E., L. Malinverni, and J. Yip. 2022, June. “Don’t Let the Robots Walk Our Dogs, But It’s Ok For Them To Do Our Homework: Children’s Perceptions, Fears, and Hopes in Social Robots.” Interaction Design and Children, 352–361. https://doi.org/10.1145/3501712.3529726.
  • Sadler, T., and D. Zeidler. 2004. “The Morality of Socio-Scientific Issues: Construal and Resolution of Genetic Engineering Dilemmas.” Science Education 88: 4–27.
  • Saltz, J., M. Skirpan, C. Fiesler, M. Gorelick, T. Yeh, R. Heckman, and N. Beard. 2019. “Integrating Ethics Within Machine Learning Courses.” ACM Transactions on Computing Education, 1–26. https://doi.org/10.1145/3341164.
  • Sartori, L., and G. Bocca. 2023. “Minding the Gap(s): Public Perceptions of AI and Socio-Technical Imaginaries.” AI & Society 38 (2): 443–458. https://doi.org/10.1007/s00146-022-01422-1.
  • Saunders, K. J., and L. J. Rennie. 2013. “A Pedagogical Model for Ethical Inquiry into Socioscientific Issues in Science.” Research in Science Education 43: 253–274. https://doi.org/10.1007/s11165-011-9248-z.
  • Schaper, M. M., L. Malinverni, and C. Valero. 2020, October. “Robot Presidents: Who Should Rule the World? Teaching Critical Thinking in AI Through Reflections Upon Food Traditions.” In Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society, 1–4.
  • Schaper, M. M., R. C. Smith, M. A. Tamashiro, M. Van Mechelen, M. S. Lunding, K. E. K. Bilstrup, and O. S. Iversen. 2022. “Computational Empowerment in Practice: Scaffolding Teenagers’ Learning About Emerging Technologies and Their Ethical and Societal Impact.” International Journal of Child-Computer Interaction, 100537. https://doi.org/10.1016/j.ijcci.2022.100537.
  • Scott, K. A., K. M. Sheridan, and K. Clark. 2015. “Culturally Responsive Computing: A Theory Revisited.” Learning, Media and Technology 40 (4): 412–436. https://doi.org/10.1080/17439884.2014.924966.
  • Sharma, S., H. Hartikainen, L. Ventä-Olkkonen, G. Eden, N. Iivari, E. Kinnunen, and R. F. Arana. 2022. “In Pursuit of Inclusive and Diverse Digital Futures: Exploring the Potential of Design Fiction in Education of Children.” Interaction Design and Architecture(s) 51: 219–248. https://doi.org/10.55612/s-5002-051-010.
  • Silvis, D., J. Clarke-Midura, J. F. Shumway, V. R. Lee, and S. Mullen. 2022. “Children Caring for Robots: Expanding Computational Thinking Frameworks to Include a Technological Ethic of Care.” International Journal of Child-Computer Interaction 33: 100491. https://doi.org/10.1016/j.ijcci.2022.100491.
  • Skinner, Z., S. Brown, and G. Walsh. 2020, April. “Children of Color’s Perceptions of Fairness in AI: An Exploration of Equitable and Inclusive Co-Design.” In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, 1–8. https://doi.org/10.1145/3334480.3382901.
  • Skirpan, M., N. Beard, S. Bhaduri, C. Fiesler, and T. Yeh. 2018. “Ethics Education in Context: A Case Study of Novel Ethics Activities for the CS Classroom.” Proceedings of the 49th ACM Technical Symposium on Computer Science Education, 940–945. https://doi.org/10.1145/3159450.3159573.
  • Spiel, K., E. Brulé, C. Frauenberger, G. Bailley, and G. Fitzpatrick. 2020. “In the Details: The Micro-Ethics of Negotiations and In-Situ Judgements in Participatory Design with Marginalised Children.” CoDesign 16 (1): 45–65. https://doi.org/10.1080/15710882.2020.1722174.
  • Spiel, K., E. Brulé, C. Frauenberger, G. Bailly, and G. Fitzpatrick. 2018, August. “Micro-Ethics for Participatory Design with Marginalised Children.” In Proceedings of the 15th Participatory Design Conference: Full Papers-Volume 1, 1–12. https://doi.org/10.1145/3210586.3210603.
  • Steele, A. 2016. “Troubling STEM: Making a Case for an Ethics/STEM Partnership.” Journal of Science Teacher Education 27: 357–371. https://doi.org/10.1007/s10972-016-9463-6.
  • Tamashiro, M., M. Van Mechelen, M. M. Schaper, and O. Sejer Iversen. 2021, June. “Introducing Teenagers to Machine Learning Through Design Fiction: An Exploratory Case Study.” In Proceedings of the 20th Annual ACM Interaction Design and Children Conference, 471–475. https://doi.org/10.1145/3459990.3465193.
  • Tironi, M., M. Chilet, C. U. Marín, and P. Hermansen, eds. 2023. Design For More-Than-Human Futures: Towards Post-Anthropocentric Worlding. New York: Taylor & Francis.
  • Uehara, T. 2020. “Can Young Generations Recognize Marine Plastic Waste as a Systemic Issue?” Sustainability 12 (7): 2586. https://doi.org/10.3390/su12072586.
  • UNICEF. 2021a. Adolescent Perspectives on Artificial Intelligence. Accessed 21 March 2024. https://www.unicef.org/globalinsight/sites/unicef.org.globalinsight/files/2021-02/UNICEF_AI_AdolescentPerspectives_20210222.pdf.
  • UNICEF. 2021b. Policy Guidance on AI for Children. Version 2.0. Accessed 21 March 2024. https://www.unicef.org/globalinsight/reports/policy-guidance-ai-children.
  • UNICEF. 2022. Our Future Pledge: An Agenda for Futures by Youth. Accessed 21 March 2024. https://www.unicef.org/globalinsight/media/3016/file/%20UNICEF-Innocenti-YFF-Our-Future-Pledge-toolkit-2023.pdf.
  • United Nations. 1989. The United Nations Convention on the Rights of the Child. Accessed 21 March 2024. https://www.unicef.org.uk/wp-content/uploads/2016/08/unicef-convention-rights-child-uncrc.pdf.
  • Vakil, S. 2018. “Ethics, Identity, and Political Vision: Toward a Justice-Centered Approach to Equity in Computer Science Education.” Harvard Educational Review 88 (1): 26–52. https://doi.org/10.17763/1943-5045-88.1.26.
  • Vakil, S., and M. McKinney de Royston. 2022. “Youth as Philosophers of Technology.” Mind, Culture, and Activity, 1–20. https://doi.org/10.1080/10749039.2022.2066134.
  • Van Mechelen, M., G. E. Baykal, C. Dindler, E. Eriksson, and O. S. Iversen. 2020, June. “18 Years of Ethics in Child-Computer Interaction Research: A Systematic Literature Review.” In Proceedings of the Interaction Design and Children Conference, 161–183.
  • Ventä-Olkkonen, L., N. Iivari, S. Sharma, T. Molin-Juustila, K. Kuutti, N. Juustila-Cevirel, and J. Holappa. 2021, June. “Nowhere to Now-Here: Empowering Children to Reimagine Bully Prevention at Schools Using Critical Design Fiction: Exploring the Potential of Participatory, Empowering Design Fiction in Collaboration with Children.” In Designing Interactive Systems Conference 2021, 734–748. https://doi.org/10.1145/3461778.3462044.
  • Venturini, T., and R. Rogers. 2019. “API-Based Research or How Can Digital Sociology and Journalism Studies Learn from the Cambridge Analytica Data Breach.” Digital Journalism 7 (4): 532–540. https://doi.org/10.1080/21670811.2019.1591927.
  • Wargo, J. M., and J. Alvarado. 2020. “Making as Worlding: Young Children Composing Change Through Speculative Design.” Literacy 54 (2): 13–21. https://doi.org/10.1111/lit.12209.
  • Waycott, J., C. Munteanu, H. Davis, A. Thieme, S. Branham, W. Moncur, and J. Vines. 2017, May. “Ethical Encounters in HCI: Implications for Research in Sensitive Settings.” In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, 518–525.
  • Williams, R., S. Ali, N. Devasia, D. DiPaola, J. Hong, S. P. Kaputsos, and C. Breazeal. 2022. “AI+ Ethics Curricula for Middle School Youth: Lessons Learned from Three Project-Based Curricula.” International Journal of Artificial Intelligence in Education, 1–59. https://doi.org/10.1007/s40593-022-00298-y.
  • Wolff, A., A. Pässilä, A. Knutas, T. Vainio, J. Lautala, and L. Kantola. 2021. “The Importance of Creative Practices in Designing More-Than-Human Cities.” In Handbook of Smart Cities, 1643–1664. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-69698-6_74
  • Yarosh, S., I. Radu, S. Hunter, and E. Rosenbaum. 2011, June. “Examining Values: An Analysis of Nine Years of IDC Research.” In Proceedings of the 10th International Conference on Interaction Design and Children, 136–144. https://doi.org/10.1145/1999030.1999046.
  • Yelavich, S., and B. Adams, eds. 2014. Design as Future-Making. Bloomsbury Publishing. https://doi.org/10.5040/9781474293907
  • York, E., and S. N. Conley. 2020. “Creative Anticipatory Ethical Reasoning with Scenario Analysis and Design Fiction.” Science and Engineering Ethics 26: 2985–3016. https://doi.org/10.1007/s11948-020-00253-x.
  • Zalasiewicz, J., M. Williams, A. Smith, T. L. Barry, A. L. Coe, P. R. Bown, and P. Stone. 2008. “Are We Now Living in the Anthropocene?” GSA Today 18 (2): 4. https://doi.org/10.1130/GSAT01802A.1.
  • Zhang, H., I. Lee, S. Ali, D. DiPaola, Y. Cheng, and C. Breazeal. 2022. “Integrating Ethics and Career Futures with Technical Learning to Promote AI Literacy for Middle School Students: An Exploratory Study.” International Journal of Artificial Intelligence in Education, 1–35. https://doi.org/10.1007/s40593-022-00293-3.