Research Article

Transparency as the defining feature for developing risk assessment AI technology for border control


ABSTRACT

AI systems to be used in migration, asylum and border control management are qualified as high-risk AI under the current draft AI Act in the EU. The draft Act introduces strict requirements for the development and use of these systems, including the requirement that high-quality data be used for training algorithms in order to mitigate the risks to fundamental rights and safety. Based on research conducted in the framework of the H2020 CRiTERIA project,[1] this study analyses, from a legal doctrinal approach, whether open-source data from Social Media platforms comply with the high-quality data requirement and what challenges they present for the transparency requirement. In view of the requirements introduced for mitigating the risks that high-risk AI poses to fundamental rights and safety, the compliance of open-source data from Social Media with the high-quality data requirement is put into doubt. As transparency is found to be the defining line between high-risk and unacceptable-risk AI, it is argued that the use of open data from Social Media for risk assessment border control AI systems might present an unacceptable risk for the protection of fundamental rights if proper safeguards are not followed.

1. Introduction

A large amount of data without any privacy filters or, said differently, open-source data is available on Social Media platforms. The availability of these data, as well as the ease of accessing them, makes them attractive for training algorithms and for developing new Artificial Intelligence (AI) solutions. While clear benefits are identified for the use of risk assessment AI for border control (Dumbrava 2021, 4), legal and ethical concerns, especially linked to the protection of the fundamental rights to privacy and data protection as well as human dignity, arise when open-source data from Social Media are used for training AI systems.

In the current draft of the AI Act (2021) proposed by the European Commission (Art 6),[2] the high-risk qualification of AI refers to those systems that present potential risks to individuals' enjoyment of their fundamental rights and to their safety. AI systems to be used in the area of migration, asylum and border control fall under this category. The reason for this qualification is that AI systems for border control affect people who are often in a particularly vulnerable position and who are dependent on the outcome of the actions of the border authorities (draft AI Act 2021, Recital 39). The accuracy, non-discriminatory nature and transparency of the AI systems used in this context are particularly important to guarantee respect for the fundamental rights of the affected individuals, for example, the rights to non-discrimination and to the protection of private life and personal data. These challenges have been at the basis of a vivid debate about qualifying AI used for border control and migration as presenting unacceptable risks and thus banning its use.[3] The line between high-risk AI and unacceptable-risk AI is a very fine one, and transparency compliance plays a crucial role in establishing this line.

In the European Union (EU), the use of open data from the public sector for training AI is currently encouraged and supported in the framework of the Digital Single Market Strategy (2015). Closely linked to this, the Data Governance Act aims to further facilitate the reuse of public sector data and to encourage data sharing by individuals as well as by private businesses, based on the concept of data altruism (Data Governance Act 2020, Art 2(16)). In the area of border control, monitoring, analysis and open-source intelligence based on internet activities, including Social Media platforms, are legally prescribed among the services that the European Border Surveillance System offers (EUROSUR 2019, Art 28(2)(h)). As a result, there are current initiatives (Frontex 2020; European Commission 2023) to assess the potential inclusion of these data in border control systems for closing the gaps that are often created by the exclusive use of official databases (COM 2016, 12). One can notice, though, that AI systems are not explicitly mentioned in the EUROSUR Regulation, and open data from Social Media are not mentioned as such in the draft AI Act. As a result, practices of employing open-source data from Social Media for training algorithms might flourish following the legal maxim: ‘Everything which is not prohibited is allowed’. Thus, a legal and ethical analysis of the use of open-source data for training risk assessment AI for border control is needed to assess the lawfulness of such practices.

This study analyses the legal and ethical challenges that the use of open data from Social Media for training AI systems for border control creates for the protection of the rights of individuals, as well as the safeguards for introducing trustworthy AI. The findings are based on, but not limited to, research conducted in the framework of the H2020 CRiTERIA project. The scope of this project is to create a novel, comprehensive, feasible and human-rights-sensitive risk and vulnerability analysis framework that will assist border agencies in better understanding contemporary security threats, connected to, but not restricted to, migration, and in predicting and detecting risks from both controlled and uncontrolled border crossings. In this framework, the use of open-source data from Social Media is envisaged to fill in information gaps and to be used for the development of innovative and effective information technology and AI methods in support of the approach for risk, vulnerability and threat analysis.

After this introduction, Section 2 identifies and analyses the legal and ethical challenges that the use of open-source data from Social Media presents. Section 3 focuses on the transparency requirement for AI; the draft AI Act is analysed with an emphasis on the transparency requirements for high-risk AI. Section 4 distinguishes between risk assessment AI systems for border control that directly affect natural persons and those that do not, and reflects upon the transparency requirements and open data from Social Media, projecting the findings onto both types of systems. The concluding remarks are presented in the final Section 5.

2. Legal and ethical challenges created by the use of open-source data from Social Media

This section focuses on the legal and ethical challenges that derive from the use of open-source data from Social Media for training AI technology to be used for risk assessment purposes. For understanding the legal challenges, the emphasis is put on the EU data protection framework as well as on studies conducted by various EU bodies with regard to the use of these data for border-crossing purposes. However, since, for the proper protection of natural persons in times of rapid technological innovation, the law sets only a minimum standard, attention will also be paid to ethical concerns that often escape legal protection. Identifying these challenges is very important since doing so enables the protection of the fundamental rights of natural persons even beyond the legal requirements.

2.1. Legal challenges

The use of open-source data from Social Media for risk analyses in border control has been considered for a long time by various EU institutions, as, for example, in the case of Frontex.[4] Warnings about the use of these data and the data protection issues involved have been raised by the European Data Protection Supervisor, which qualifies the monitoring of Social Media users as a personal data processing activity that creates a high risk for individuals’ rights and freedoms (EDPS 2019, 3). The main concern is that the use of these data goes beyond any reasonable expectations of data subjects. While individuals use Social Media services for their own purposes, such as communication, the further use of the data goes beyond their initial purpose and context. Moreover, such processing of data is done in ways that the individual could not reasonably anticipate when making the data available (Edwards and Urquhart 2016). Especially if the data are used for profiling data subjects, this may imply the inference of interests or other characteristics that the data subject had not actively disclosed, thereby undermining the data subject’s ability to exercise control over his or her personal sphere.

The potential legal issues that the EDPS (2019) identified regarding the use of open-source data from Social Media, in a proposal for their use by the European Union Agency for Asylum, are:

  • Potential high risks to the fundamental rights of data subjects and groups concerned;

  • Infringement of the principles of purpose limitation and data minimisation (GDPR 2016, Art 5(1)(b)–(c));

  • Infringement of the principles of fairness and transparency (GDPR, Art 5(1)(a));

  • Risks in processing special categories of personal data with the result of discriminating against data subjects (GDPR, Art 9(1));

  • Difficulty in complying with data subjects’ rights, especially the right to information, access to data and to object to data processing activities (GDPR, Chapter 3);

  • Challenges to data quality, since what is posted on Social Media is not necessarily true or accurate (GDPR, Art 5(1)(d));

  • Security concerns related to potential misuse of the data collected (GDPR, Arts 32–34).

Thus, any use of open-source data from Social Media for profiling purposes must be assessed and must be accompanied by strong safeguards for the protection of data subjects’ rights and freedoms. Furthermore, such use must strictly comply with the applicable data protection framework.

Legal concerns have also been identified with regard to the use of open-source aggregate data from Social Media. These data carry a high risk of containing unreliable information, and as a result, any profiling based on inaccurate data would compromise the effectiveness of border control due to incorrect correlations and data distortions (FRA 2018, 119). There are different causes of potentially biased results when aggregated data are used, including issues with the training datasets (skewed, incomplete, outdated or disproportionate data) and with the algorithms (poorly designed, reflecting biased norms and prejudices, or poorly implemented). According to a report from eu-LISA, the problem can be addressed either by using representative data sets for training algorithms or by creating synthetic data sets with characteristics that are representative of the population (eu-LISA 2020). However, while the former solution could still pose data protection risks, the latter could lead to the higher error rates associated with the use of synthetic data (Dumbrava 2021, 27).

The above discussion shows that there are potential legal concerns about the use of open-source data from Social Media, even in an aggregated form. Data quality is one such concern, with implications for the quality of the results of any analysis, even after all the data protection standards are fulfilled. However, it needs to be pointed out that these concerns are identified with regard to the use of data for individual profiling or for the projection of the results of data analysis onto individual cases. General data analyses that do not affect natural persons are not included in the discussion of the legal concerns thus far.

2.2. Ethical challenges

Issues such as potential discrimination, biased evaluations, and potential errors or misuses of the system and data may come at a high price not only from a legal perspective but also from an ethical one (Centre for the Study of Global Ethics 2010). The legal challenges identified in the previous section often also translate into ethical concerns. The latter might go beyond legal regulation and thus be present even if all the processing activity is done in compliance with the existing legal framework (Berry 2004).

Many ethical concerns are already codified. The Ethics Guidelines for Trustworthy Artificial Intelligence, for example, emphasise the need for transparency and accountability, also present in Article 5(1)(a) and (2) GDPR, as well as the requirements for consent, information, diversity and non-discrimination included in the GDPR provisions (AI HLEG 2019). Thus, observing the legal framework would already address these concerns. However, not all ethical concerns are codified. Considering all ethical challenges, codified and not, contributes to better protection of the rights of natural persons and to a better acceptance of new technological methods in society, and it helps to pacify relations between the various actors in the process.

Thus, the ethical challenges that derive from the use of open-source data from Social Media can be linked to legal challenges or can be distinct from them. For example, the accuracy and reliability of the data, together with the change in the purpose of processing and the reliance on adequate, relevant and proportionate personal data, are closely linked to the data protection legal framework. Other challenges, like the self-selecting nature of Social Media users; inequalities in access to Social Media platforms; unclear boundaries between public and private spaces for Social Media users; the difficulty for analysts of obtaining meaning from heterogeneous data; and the difficulty of ensuring anonymity and preserving the privacy of natural persons using Social Media, are directly linked to the nature of the data and reinforce the data quality concern. Furthermore, one could also identify challenges deriving from the vulnerability of the natural persons crossing a border. While not everyone crossing a border would qualify as vulnerable, many are, and risks can be identified: potential misuse of, and harm caused by, the findings of the data analyses; potential harm caused by profiling; and adverse effects, like pushing potential migrants onto more dangerous migration routes (Bloemraad and Menjívar 2022).

The identified challenges can be projected as falling under the following meta-categories of ethical concerns:

  1. Protection of the autonomy and dignity of data subjects – individuals are considered on the basis of their data and their behaviour on Social Media and not on the basis of their actions in the physical world. Understanding and qualifying information might require understanding the context and mental framework in which certain data were published, and it is difficult to reflect any changes of mind or ex post data deletions or modifications. Furthermore, untrue information or fake data might influence the results of research.

  2. Discrimination – the possibility of profiling and of identifying personal and sensitive data might create situations of discrimination and biased data processing. Stereotypes may be reproduced even unintentionally. This might be augmented by the fact that users of Social Media platforms are self-selected and inequalities exist in accessing the services. The bias effect might even be the result of the way data are scraped from Social Media, and thus might be inherited from another, independently existing system.

  3. Chilling effects in society – learning about the ways Social Media data are used might lead individuals to use the platforms less, to refrain from expressing their opinions freely or to use coded language. They might even be inclined to use different, and often more dangerous, migration routes (Dimitriadi et al. 2021).

2.3. Reflection on the identified legal and ethical challenges with regard to risk assessment systems

The legal and ethical challenges identified make it difficult to justify the necessity and proportionality of the use of open-source data from Social Media for training algorithms and for building any IT or AI systems to be used in individual cases of border control. While the data might fill in informational gaps, legal and ethical concerns about the reliability of the data need to be taken into account, and very robust data protection standards need to be introduced in order to comply with the data protection principles of fairness and transparency, with purpose limitation and data minimisation, with data security and data protection by design and by default, and with all data subject rights.

However, one can also argue that, even though there are concerns about the use of open-source data from Social Media, at the aggregated level the legal and ethical challenges are less prominent in a risk assessment tool whose results do not directly target natural persons (Mahoney 2022). The CRiTERIA tool, for example, takes as a starting point existing methodologies used for border control, such as CIRAM,[5] which are recognised as presenting limitations in terms of scope, data sources, methodology and overall use. To overcome these limitations, the CRiTERIA tool proposes a multi-perspective risk and vulnerability analysis methodology with multi-source, multi-lingual analysis for serving the complex indicators of the threat and risk assessment methodology as well as for making the analysis accessible in a verifiable and understandable way. This proposed tool makes use of open-source data from Social Media to close information gaps, but it is not used for profiling data subjects or for projecting the results of data analyses onto individual cases. The proposed tool is designed with the aim of understanding general trends and risks at EU borders and of serving border authorities towards a complete risk assessment analysis. As such, the tool aims to be one of the various indicators to be considered during a risk and threat analysis process, providing a form of strategic rather than operational intelligence.

Thus, while the risks for natural persons would be minimised if they are not directly affected, the reliability of the data used for training the system still remains a point of concern. Unreliable data, at the personal or aggregated level, will lead to unreliable results and, even if not directly, they might indirectly affect data subjects crossing a border. Furthermore, concerns about data security would continue to be present, together with the difficulty of ensuring that the data minimisation principle is complied with. Thus, despite the lack of direct data subject concerns, compliance with the transparency requirements remains crucial not only from a data protection but also from a technology design perspective. Since, from an AI perspective, non-compliance with the transparency requirements would qualify the risk to data subjects as unacceptable, the understanding and regulation of transparency in the draft AI Act is discussed in the following section.

3. Transparency and AI

AI is not yet regulated at the European level. In April 2021, a draft regulation on AI was proposed by the European Commission, and in December 2023 a political agreement was reached after a trilogue procedure between the European Parliament, the Council and the European Commission. With the future introduction of this legislation, the EU aims to address the risks generated by specific uses of AI through a set of complementary, proportionate and flexible rules intended to provide the Union with a leading role in setting the global gold standard for AI.

In Annex III of the draft Act, AI systems to be used in migration, asylum and border control management are considered high-risk systems when they are intended to:

  1. be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person;

  2. be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered the territory of a Member State;

  3. be used by competent public authorities for the verification of the authenticity of travel documents and supporting documentation of natural persons and the detection of non-authentic documents by checking their security features;

  4. assist competent public authorities in the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.

There are two important considerations linked to the above-listed situations. Firstly, all situations refer to cases in which the use of AI technology for border control directly affects a natural person – or, said differently, to operational intelligence. Thus, a risk assessment AI tool that is not intended for use in individual cases or, said differently, a system used for strategic intelligence, is not listed as high-risk AI.

Secondly, being qualified as high-risk does not mean that an AI system cannot be developed or used in the future. It only means that the use of these tools might present high risks to the safety and the protection of fundamental rights of natural persons. Thus, a number of checks and safeguards need to be in place, as well as the specific requirements enshrined in Articles 8–29 of the current draft AI Act. Although these provisions are not yet law at the time of writing, it is important to already reflect upon them. This section presents a reflection on the general and specific requirements that high-risk AI systems need to comply with, and it emphasises the transparency requirement.

The proposed AI Act has Article 114 TFEU as its legal basis. Thus, keeping in mind the economic nature of the EU as a regional organisation, one can clearly identify that the focus of the legislation is on the AI product itself and its free movement in the internal market. Because of this economic-driven focus, the draft places an emphasis on the behaviour of the economic operators on the market (Busuioc et al. 2023). Fundamental rights concerns come as a secondary aim of the legislation, but not one prominent enough to justify the use of a more specific legal basis. This conclusion can be drawn from the general requirements and safeguards for the design of AI systems. The main focus is on risk management, the use of data and data governance, technical documentation, record-keeping, human oversight, accuracy, robustness and cybersecurity, without undermining the transparency principle. Furthermore, from a users’ perspective, the draft AI Act focuses on the quality of the management system, the technical documentation, conformity assessment, the automatically generated logs, corrective actions, the duty of information, as well as cooperation with competent authorities.

Thus, for safeguarding the rights of natural persons, apart from the understanding that the fulfilment of the requirements for transparency and explainability of the AI system is crucial for the use of high-risk AI, it is important to also assess these requirements in light of other legal sources. In the GDPR, the transparency principle (Art 5(1)(a)) is closely linked to the principle of accountability (Art 5(2)), and it even supports the explainability of a data processing activity (Varošanec 2022). While Arts 13–15 GDPR refer to the information and explanation duties of controllers of personal data, the law is silent on the level of detail needed for fulfilling these duties. In the draft AI Act, the transparency requirement seems more developed than in the GDPR, even though it is not yet fully understood what this requirement entails. Article 10(2–4) of the draft AI Act seems to link transparency to the quality of the data used for training the AI tools and for their validation and testing. The data must be relevant, representative, free of errors and complete. These obligations are imposed on AI providers in Art 13 of the draft AI Act.

These legal requirements are very relevant, as transparency is a necessary condition for trust and is very closely linked to this concept (Larsson and Heintz 2020). Transparency is also seen as a pre-condition for accountability and, as such, transparency in AI is not just understood as visibility but also refers to the explainability of a process (Busuioc et al. 2023). While the use of AI for low-risk decisions carries little, if any, moral hazard and might not require a detailed explanation, the situation becomes problematic when decisions that directly affect natural persons are taken (Enschenbach 2021). Fulfilment of the transparency requirement in these cases is very important, since failure would require the AI tool to be qualified as falling under the category of unacceptable risk and thus not be used.

4. Open-source data from Social Media and transparent AI for border control

A 2020 study done for the European Commission (Deloitte 2020) identified five clusters of opportunities for the use of AI in border control, migration and security. These are chatbots and virtual assistants; risk assessment and application triaging in the context of the visa process; knowledge management tools; policy insight and analytics tools; and computer vision applications ‘to gain insights from image processing of individuals (faces, fingerprints, etc.) and objects’.

Furthermore, Frontex (the European Border and Coast Guard Agency) has argued for the use of more open data from Social Media in the framework of the democratisation of AI (Frontex 2020, 26). In a recent draft Regulation from the European Commission (European Commission 2023), it is required to increase the number of open-source and Social Media monitoring specialists to support Member States’ investigations and preventive measures concerning organised migrant smuggling and trafficking in human beings. In relation to AI, this encompasses the proliferation of open-source, low-cost technological solutions, enabling easier access for a greater number and variety of stakeholders, including border security agencies. However, when considering the use of open data from Social Media, it is important to distinguish between AI systems that directly affect natural persons and AI systems that do not. This distinction is important in order to weigh the risks that the use of these systems would present.

From Annex III of the current draft AI Act, it is clear that the high-risk qualification is reserved for those systems that directly affect the situation of a natural person. Thus, AI risk assessment systems to be used for regular and irregular border crossings for understanding general risks and threats, as in the case of the CRiTERIA system, would escape such qualification.

For high-risk systems, compliance with the transparency requirement, and especially with the data quality standard, is crucial for justifying the lawful use of such systems. As clearly stated in Recital 44 of the draft AI Act, high data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensuring that the high-risk AI system performs as intended and safely and does not become the source of any discrimination prohibited by Union law. The legal and ethical challenges of open-source data from Social Media discussed in Section 2 have shown that it is impossible to ensure that these data are free from errors and complete, as required for a proper training of the algorithms. From an accountability perspective, the nature of the data and of the Social Media environment makes it impossible to prove that the databases used for training the systems are unbiased by nature or that they will provide unbiased output. Thus, it is impossible to establish the severity of the risks that biased data present to the safety and security of natural persons. A proper reliance on the legal rules and ethical concerns would qualify such AI tools as presenting unacceptable risks.

The situation is different when the risk assessment system does not affect natural persons directly, as in the example of CRiTERIA. Such a system escapes the high-risk qualification and thus presents low risks to the safety of natural persons. The use of open data from Social Media could therefore be justified for these systems. However, it must be ensured that proper security measures are in place and that the systems, as well as the created databases, are not used for any purposes other than the ones that justify their use. For example, a system and database created for a general risk assessment (strategic intelligence) must not be used for profiling natural persons (operational intelligence).

5. Conclusion and reflections

The use of open-source data from Social Media for training AI technology is encouraged not only because these data are easily accessible and available but also because of the silence of the current legal framework on the use of these data. The AI democratisation argument encourages their use in order to make AI more accessible and to provide low-cost solutions. In this paper, it was argued that the use of open-source data from Social Media for training AI for border control needs to be evaluated in light of data quality and that it presents problems under the transparency principle.

From a legal perspective, the use of open-source data from Social Media creates potential risks to the fundamental right to data protection of data subjects because of the potential infringement of the principles of purpose limitation and data minimisation as well as of the principles of fairness and transparency. Furthermore, it creates situations of potential discrimination against data subjects based on the processing of special categories of personal data, difficulties in complying with data subjects’ rights, and challenges for ensuring the quality of data and the security of the databases. From an ethical perspective, three main categories of potential concerns were identified: (i) difficulties in protecting the autonomy and dignity of data subjects; (ii) potential discrimination based on sensitive information; and (iii) chilling effects in society.

While the development of AI technology is advancing with tremendous speed, it is crucial for the protection of the rights of natural persons to ensure that the technology does not interfere unjustly with their fundamental rights. The draft AI Act introduces a qualification of AI technology on the basis of the risks that the use of the technology presents for the enjoyment of fundamental rights and for the safety of natural persons. Transparency, closely linked to the explainability of the technology, serves as the dividing line between high-risk and unacceptable-risk technology.

AI systems to be used for border control are qualified as high-risk in the draft AI Act. A close look at the letter of the law, however, shows that this qualification embraces only those systems that directly affect natural persons. AI used for general risk assessment systems that do not affect natural persons escapes such qualification.

A legal and ethical analysis of open-source data from Social Media has shown that these data do not pass the data quality requirement. Thus, they cannot be used for training high-risk AI. However, the data can be used for training minimal or limited-risk AI. General risk assessment systems would not fall under the high-risk qualification, making it possible for such systems to be subject to less strict requirements with regard to the quality of the data used. However, these systems must ensure that the data quality does not undermine the overall results, and they need to apply very strict data protection standards. Since it currently appears that the AI regulation in the EU will be built primarily around the aim of fulfilling economic interests in the internal market, it is essential to look for safeguards for the protection of the rights of natural persons and their safety beyond the letter of this law. Only in this way can one benefit from the advantages of technology while ensuring the protection of the rights of natural persons.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 The H2020 CRiTERIA project (Comprehensive data-driven Risk and Threat Assessment Methods for the Early and Reliable Identification, Validation and Analysis of migration-related risks) has received funding from the European Union’s Horizon 2020 research and innovation action programme under grant agreement No 101021866.

2 At the time of writing and of the revision of this article (January 2024), the official draft of the AI Act available does not reflect the trilogue agreement reached between the Commission, the European Parliament and the Council on 9 December 2023. This agreement does not change the core of our discussion. The main outcomes of the political deal can be found here: https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai accessed 30 January 2024.

3 See the open letter by 195 organisations and individuals, led by EDRi, Access Now, Refugee Law Lab and PICUM ‘Civil society calls for the EU AI Act to better protect people on the move’, available at: https://edri.org/our-work/civil-society-calls-for-the-eu-ai-act-to-better-protect-people-on-the-move/ accessed 14 June 2023.

4 In 2019, Frontex opened a call for tender for the provision of services related to social media analysis and monitoring for the purpose of improved risk analysis regarding future irregular migratory movements impacting external borders of the European Union (EU) and Schengen Associated Countries (SAC) and to support the planning, conduct and evaluation of joint operations coordinated by Frontex, as described in the terms of reference (Annex II). The call was cancelled shortly after publication. Please check: https://etendering.ted.europa.eu/cft/cft-display.html?cftId=5471 accessed 14 June 2023.

5 A short overview of CIRAM (Common Integrated Risk Analysis Model) can be found online at: https://frontex.europa.eu/we-know/situational-awareness-and-monitoring/ciram/ accessed 14 June 2023.

References