Automating Decision-making in Migration Policy: A Navigation Guide
Summary
Algorithm-driven or automated decision-making (ADM) models and programs are increasingly used by public administrations to assist human decision-making processes in public policy—including migration and refugee policy. These systems are often presented as a neutral, technological fix to make policy and systems more efficient. However, migration policymakers and stakeholders often do not understand exactly how these systems operate. As a result, the implications of adopting ADM technology are still unclear, and sometimes not considered at all. In fact, automated decision-making systems are never neutral, nor is their employment inevitable. Making sense of their function and deciding whether or how to use them in migration policy requires consideration of the specific context in which ADM systems are employed.
Three concrete use cases at core nodes of migration policy in which automated decision-making is already being developed or tested are examined: visa application processes, placement matching to improve integration outcomes, and forecasting models to assist with planning and preparedness related to human mobility or displacement. All cases raise the same categories of questions: from the data employed, to the motivation behind using a given system, to the action triggered by models. The nuances of each case demonstrate why it is crucial to understand these systems within a bigger socio-technological context, and they provide categories and questions that can help policymakers understand the most important implications of any new system, including both technical considerations (related to accuracy, data questions, or bias) as well as contextual questions (what are we optimizing for?).
Stakeholders working in the migration and refugee policy space must make more direct links to current discussions surrounding governance, regulation of AI, and digital rights more broadly. We suggest some first points of entry toward this goal. Specifically, as next steps, stakeholders should:
- Bridge migration policy with developments in digital rights and tech regulation
- Adapt emerging policy tools on ADM to migration space
- Create new spaces for exchange between migration policymakers, tech regulators, technologists, and civil society
- Include discussion on the use of ADM systems in international migration fora
- Increase the number of technologists or "bilinguals" working in migration policy
- Link tech and migration policy to bigger questions of foreign policy and geopolitics
Making well-founded decisions regarding the development and use of ADM systems in this area, and doing so more proactively, will require a more conscious, deliberate approach to all issues surrounding their deployment and implications. The employment of ADM systems also raises questions on how people will be able to be mobile across borders, and which technologies, systems, data collection, and surveillance we are willing to accept as individuals and societies as the price of human mobility. This is the context in which decisions are being automated in the migration space.
Introduction
From credit scores, or parole decisions, to predictive policing, intelligent video surveillance, or recommendations about the series we watch or the music we stream, AI-powered automated decision-making (ADM) systems are already influencing our lives in ways mostly unseen. Fueled by the unprecedented mass collection of personal and public data for the purpose of establishing better prediction models of individual or group behavior, these “data-driven algorithmically controlled systems”1 are increasingly employed in more and more areas of public policy. Governments, humanitarian actors, the tech sector, academia, and NGOs are also developing or piloting new systems in areas of migration or refugee policy and human mobility more broadly.
Recent applications range from the automated chatbot Kamu by the Finnish Immigration Service, to the use of automated dialect recognition to assist asylum procedures in the German Federal Office for Migration and Refugees, to the Infoshield algorithm aimed at identifying organized human trafficking via online ads. There are also far more controversial uses, such as the 2018 "Extreme Vetting Initiative" announced by the U.S. Immigration and Customs Enforcement that included the plan to automatically flag visa applicants and recipients by scanning social media data as a proxy for whether they would be a "positively contributing member of society" and would "contribute to the national interests."2 Recent years have also brought the vast proliferation of facial recognition technology, for example at airports, where it is used at automated border kiosks and at boarding gates, or as live facial recognition technology, say to screen against criminal watchlists. A number of recent studies commissioned by the European Commission and the European Parliament illustrate a heightened interest in implementing these systems in different areas of migration policy, such as the "Feasibility study on a forecasting and early warning tool for migration based on artificial intelligence technology" (February 2021), the "Use of Artificial Intelligence in Border Control, Migration and Security" (May 2020), or "Artificial Intelligence at EU borders" (July 2021) (though some tested initiatives date back earlier, such as the controversial3 research project iBorderCtrl, which ran from 2016 to 2019).
Though frequently presented as neutral or as a simple technological fix to make policy and systems more efficient, these systems are still often poorly understood by the migration policymakers and stakeholders deploying them. As a result, the implications for individuals as well as for migration policy more broadly have either not been considered or are still unclear. These include issues of a technical nature related to efficiency, accuracy, or data bias. But importantly, implications regarding discrimination, procedural justice, accountability, and protection of basic human rights also remain unexamined.
ADM systems are never neutral, nor is their employment inevitable. Making sense of their function and deciding on whether or how to use them in migration policy going forward requires consideration of the specific context in which ADM systems are being employed. What is true for AI systems overall holds true in this case, too, or as stated by Kate Crawford, to "understand how AI is fundamentally political, we need to go beyond neural nets and statistical pattern recognition to instead ask what is being optimized, and for whom, and who gets to decide."4 Decisions made in the migration policy space almost always have a fundamental impact on an individual's life. Furthermore, migrants are often already in vulnerable situations and have very few alternatives or options for recourse, making issues of potential discrimination, informed consent, or procedural justice all the more salient. In addition, due to the security imperative of sovereign national governments, decisions about migration often happen far from public scrutiny, with little or no information made available to the public, and often involve some discretion by individual caseworkers or decision-makers. Finally, the employment of ADM systems also involves questions on how people will be able to be mobile across borders, and which technologies, systems, data collection, and surveillance we are willing to accept as individuals and societies as the price of human mobility. This is the context in which decisions in the migration space are being automated.
This paper takes a deeper look at three concrete use cases at core nodes of migration policy where automated decision-making is already being developed or tested: visa application processes; placement matching to improve integration outcomes; and forecasting models to assist with planning and preparedness related to human mobility or displacement. Using (risk) scoring, classification, or pattern recognition, the three areas illustrate the range of applications within the field, while each differs in terms of the degree or manner in which it assists human decision-making (rarely are ADM systems today fully automated). At the same time, all cases raise the same categories of questions: from the data employed, to the motivation behind using a given system, to the action their models trigger. The nuances of each case demonstrate why it is crucial to understand these systems within a bigger socio-technological context, and they provide categories and questions that can help policymakers understand the most important implications of any new system, including both technical considerations (related to accuracy, data questions, or bias) as well as contextual questions (what are we optimizing for?). Stakeholders working in the migration and refugee policy space must make more direct links to current discussions surrounding governance, regulation of AI, and digital rights more broadly. We then suggest some first points of entry toward this goal. It will be necessary to bridge emerging developments in technology governance and regulation with the use of tech in different migration and refugee policy fields more holistically, and to do so quickly.
The emerging migration and human mobility system will include new technologies in some shape or form. To be able to make well-founded decisions regarding the development and use of ADM systems in this area, and in order to do so more proactively, we must develop a more conscious, deliberate approach to issues surrounding their deployment and the implications for individuals and societies.
About the Migration Strategy Group
The Migration Strategy Group on International Cooperation and Development (MSG) is an initiative of the German Marshall Fund of the United States, the Bertelsmann Foundation, and the Robert Bosch Stiftung. Initiated in 2013, the MSG brings together representatives from different German ministries and other relevant actors at regular intervals to discuss current migration-related foreign and development policy issues. Since 2020, the MSG has been looking into how digitalization and technological development are changing migration management and policy. Starting in 2022, the MSG will expand on the topic of technology and migration and conduct international stakeholder exchanges and network building.
Understanding ADM systems as part of a socio-technological framework
Algorithm-driven models and programs are increasingly used by public administrations to assist human decision-making processes in a number of public policy fields—including migration and refugee policy. Unlike the industrial automation of the past—think Ford assembly lines—current ADM systems differ in that they have a direct decision-making element (of varying degrees and kinds), and in that they are inseparable from the vast amounts of digital data collected (their fuel). Furthermore, they are a moving target: they do not remain static but are explicitly designed to adapt and evolve. Depending on the use case, ADM systems can vary from those producing an almost fully automated decision to those simply assisting a step in a decision-making process. So far, fully automated decisions are a rarity. There are more and less controversial use cases: chatbots that provide automated information on a website, for instance, are far less controversial than ADM applications that impact a person's freedom or access to credit.
At a very basic level, algorithmic decision-making or decision support systems generate predictions or detect patterns within a given set of data. Based on these predictions or patterns, outputs are provided which determine or influence decisions and actions. Those can be fully automated or can entail a human in some form in the final decision-making process. In most social use cases, a human will still be present to make a final decision—referred to as human-in-the-loop.
Technologically speaking, algorithmic decision-making systems can differ in their complexity, whether they use machine learning, and whether and where a human is involved in the decision-making process. If machine learning is employed, it can be either supervised or unsupervised, both of which draw on vast amounts of public (or sometimes private) data sets.5 With supervised machine learning, the aim is to predict outcomes for new data based on learning from a historical data set. Humans supervise the training, correcting wrong answers (outputs) during the training period. Ultimately, a validated training model is then used on new data, and its output is what is used to assist human decision-making.6
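To make this workflow concrete, the following minimal sketch in Python (using scikit-learn) walks through the supervised steps just described—train on labeled historical data, validate on held-out data, then apply to new cases. All features, labels, and numbers are hypothetical stand-ins, not any real migration system:

```python
# Minimal sketch of the supervised-learning workflow described above.
# All features and labels are synthetic stand-ins; no real system is represented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Historical, labeled data: each row is a past case; the label is the
# past (human) decision the model learns to reproduce.
X = rng.random((1000, 4))                     # stand-ins for case attributes
y = (X[:, 0] + X[:, 1] > 1).astype(int)       # stand-in for past decisions

# Hold out data to validate the trained model before it touches new cases.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_test, model.predict(X_test)))

# The validated model is then applied to new cases; its output is what
# would assist (not replace) a human decision-maker.
new_case = rng.random((1, 4))
print("predicted probability:", model.predict_proba(new_case)[0, 1])
```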
One big challenge in ADM systems using machine learning is that of "black box" models, where even the designers of the model do not understand which variables are combined to arrive at certain insights or predictions. As a result, calls for interpretable models or explainable AI (XAI) are growing ("If you cannot explain it, you cannot use it"). There are many potential issues with training data sets in machine-learning models (for example, data poisoning)7—even in cases where the variables are "explainable." One of the most important issues is unwanted bias—an issue that has increasingly raised public concern, for example, in terms of bias against women in algorithms used in recruitment processes or racial bias in facial-recognition technology.8
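One simple, widely used interpretability check is permutation importance: shuffle one feature at a time and observe how much the model's performance drops. The sketch below, again on hypothetical data, illustrates the concept only; it is not a full XAI toolkit:

```python
# Sketch of a basic interpretability check: permutation importance measures
# how much the model's score drops when one feature is shuffled.
# Data and features are hypothetical.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((500, 3))
y = (X[:, 0] > 0.5).astype(int)      # only feature 0 actually matters here

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, imp in zip(["feature_0", "feature_1", "feature_2"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")      # feature_0 should dominate
```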
There are thus technical aspects that will determine the use—and evaluation—of any such model, but grasping their implications must necessarily involve an analysis of the social ecosystem in which they take place.9 Importantly, these technical issues (say black box models, or issues of data bias) and other issues of ADM systems that are beginning to surface, are not mere technological glitches to be fixed, but rather point to more fundamental questions related to how assumptions and value-frames about the world are then translated into choices about which data to use, and for which outcome a model is designed to optimize. Against this backdrop, it is helpful to view ADM systems as “a socio-technological framework”—a case Algorithm Watch has made in their recent Automating Society Reports.10
The entire socio-technological framework in which an ADM system is employed includes aspects such as the motivation behind an institution or policymaker using the system in the first place, the purpose it is supposed to serve (or the problem it is supposed to solve11), who developed the model (a private sector company, a university, or the public sector), and through which procurement process it was acquired. It also includes important legal and ethical considerations, such as the guardrails used to maintain ethical standards, whether the data set was checked for biases, and whether private data involved in the training was legally obtained and used. Finally, the processes and institutions surrounding the model are also part of the ecosystem: Are there built-in options for recourse for those data subjects/humans directly impacted by the ADM? Are there options for oversight, access controls, and monitoring? Where are humans involved in the ADM process and final decision? And finally, which action does a model trigger? All of these questions are important in the migration field and are illustrated in the models described below.
Use cases in migration policy
This section outlines three different use cases that showcase the range of areas in which ADM is already being tested and employed in migration policy. The cases involve machine-learning systems that can impact decisions quite directly (as in processing visa applications) or indirectly, acting as a node in a decision support system (as in many forecasting models). They are not meant as an exhaustive list, but rather a starting point for more systematically tracing and evaluating the different models and use cases being employed.
A. Processing visa applications
Visa applications for all forms of short- or long-term travel have multiplied in the past decades, creating a strain on human resources to process them in a timely manner. The number of air travelers is estimated to rise from 4.3 to 5.5 billion between 2018 and 2033.12 For the EU alone, a study estimates an increase to 887 million regular border crossings for the year 2025.13 Some governments, including Canada's, have begun exploring and piloting the use of various technologies like AI and automation to more efficiently handle a growing caseload, while simultaneously navigating the new challenges that come along with doing so.14 Theoretically, automating decisions could significantly speed up the processing of certain visa cases, while also making decisions that were previously the sole discretion of an individual case officer more consistent or non-arbitrary. Technologically speaking, ADM systems in visa processes are being developed to create either "risk" scores or automated classifications into different categories (which in turn could be based on a risk score or other criteria, such as the level of complexity of an application). They are also being tested as a data analytics tool that draws on information from past visa applications to generate new insights or find new patterns, which in turn could influence visa or migration policy and practice.
Critics concerned with the use of an ADM system as part of visa processes have questioned the underlying premise of whether these systems would in fact make processes more effective or whether, for example, higher "error" rates or discriminatory outcomes would in fact lead to increased litigation down the line.15 They have also pointed toward the potential for discrimination if these systems end up codifying previous bias or discriminatory practices by case officers, or political determinations on what constitutes a "risk" (say, "migration risk") in the first place.16
Even before the digital era, "risk assessments" played a role in visa application processes. Some groups of countries are deemed "safe," placed on white lists, and eligible for visa-free travel. This has always been an area involving a degree of discretion and secrecy on the part of governments, consulates, and individual case officers. When these decisions are automated via algorithmic tools, the question of what constitutes a "risk" or which categories to employ when processing visas raises new socio-political questions of migration, human mobility, discrimination, and inequality—and amplifies existing ones. The question raised by ADM systems is then whether to begin scaling and codifying what has always been an inherently political decision-making process. The digital era has added yet another layer for consideration: the vast increase in the collection of biometric, personal, and other digital data directly tied to individuals raises significant data privacy concerns in addition to procedural fairness worries. Together, they form a key building block of the digital borders that are increasingly governing human mobility—borders that are no longer tied to territorial borders but instead can be activated almost anywhere at any time by forces unseen and often opaque. Core questions surrounding the use of ADM systems in visa processing must ultimately be situated within this bigger context.
Case Study
Canada, the United Kingdom, and the EU
Immigration, Refugees and Citizenship Canada (IRCC) is exploring, developing, and piloting various use cases of ADM systems in visa processes to assist with managing an increased caseload of people wanting to visit or immigrate to Canada. A pilot project that began in 2018 makes automatic decisions about a portion of temporary resident visa applications from China and India. The system automatically triages applications into three separate categories based on their complexity. The system then automatically approves the eligibility portion for “the most straightforward” ones (category 1); refusals are not automated, and officers assess the admissibility portion of each application and render the final decision.17
As part of a "blind" quality assurance test, 10 percent of the approved applications are also given to visa officers to test each day, which, according to IRCC itself, shows 99 percent concurrence.18 As part of the machine-learning training process of the algorithm, a human-in-the-loop reviews and approves the "rules" the model suggests. As of September 2021, IRCC has stated that it is not currently automating decisions for asylum, humanitarian, and compassionate cases, or pre-removal risk assessment, and that it does not use "black box" algorithms (where decisions cannot be knowable or explainable).19 This may change in the future.
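The asymmetry in the Canadian pilot—only positive eligibility findings are automated, refusals never are—can be expressed as a simple triage rule. The sketch below illustrates that logic only; the categories and thresholds are hypothetical, not IRCC's actual rules:

```python
# Sketch of asymmetric triage: only positive eligibility findings are ever
# automated; everything else goes to an officer. Categories and thresholds
# here are hypothetical, not IRCC's actual rules.
from dataclasses import dataclass

@dataclass
class Application:
    complexity_score: float   # hypothetical model output, 0 = simplest

def triage(app: Application) -> str:
    if app.complexity_score < 0.2:
        # Category 1: "most straightforward" -- eligibility auto-approved,
        # admissibility still assessed by an officer.
        return "auto-approve eligibility; officer assesses admissibility"
    elif app.complexity_score < 0.6:
        return "category 2: full officer review"
    else:
        return "category 3: full officer review of a complex case"

print(triage(Application(0.1)))
print(triage(Application(0.7)))   # refusals are never automated
```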
In the fall of 2020, the United Kingdom temporarily stopped the use of a "visa application streaming algorithm" after a legal challenge alleging bias and discrimination brought by digital and human rights groups, the Joint Council for the Welfare of Immigrants, and Foxglove Legal. Since 2015, the algorithm had triaged applications via a "traffic-light system," which placed visa applications for student visas or visits into three distinct "risk" categories.20 The legal challenge alleged bias in the algorithm assisting decisions, namely in that nationality was used as one component to determine the risk score (though the government rejected the bias claim). The government said it would redesign the algorithm to exclude nationality as a sorting criterion and use "person-centric attributes" (such as "evidence of previous travel") to help sift some visa applications, further committing that "nationality will not be used." It also stated that "the fact of the redesign does not mean that the [Secretary of State] accepts the allegations in your claim form [that is, around bias and the use of nationality as a criterion in the streaming process]."21
The EU is poised to introduce its European Travel Information and Authorization System (ETIAS) for the 26 countries of the Schengen area in 2022—an electronic pre-screening system for passengers from states eligible for visa-free travel to the EU (similar to the U.S. Electronic System for Travel Authorization (ESTA)). It will be interoperable with a new Entry-Exit System (EES) as well as with other EU databases managed by the agency eu-LISA. A study22 commissioned by the European Commission outlines several use cases where ADM systems could support human decision-making as part of ETIAS as well as in other forms of short- or long-term visa processing. As in the cases outlined above, ADM could be used to assist with individual rapid risk assessment, categorizing applications, or gaining new insights from past data or by combining different data sources. It is important to note that at this point in time these use cases are still hypothetical, and it is unclear when enough data (say, generated through the new EES or ETIAS itself) would in fact make many of them feasible. We include them because they illustrate the current thinking and practice by governments.
Specifically, the ETIAS screening process will include the development of "risk indicators." The report states that "AI could support in selecting these indicators and possibly adapting them over time depending on the applicant."23 The ETIAS regulation has already included an oversight body that will be involved in their selection and creation.24 Multiple further use cases envision individual risk-assessment models, similar to the ones employed in Canada or the United Kingdom. The report mentions that this could also be done by combining metadata (such as data referring to situations in certain countries) with personal data.25 It also lists examples of unsupervised machine-learning systems to either group applications into new categories (unlearnt similarities), for instance "based on similar occupations or combination of factors,"26 or to gain new insights from data, like "detecting irregular travel patterns," for example by "unsupervised uncovering of correlations between travel destinations."27
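The "unlearnt similarities" idea is what machine learning calls clustering: the algorithm groups records without being told the categories in advance, and humans must then interpret what each group means. A minimal sketch with entirely synthetic stand-in features:

```python
# Sketch of the unsupervised grouping mentioned in the report: clustering
# applications into previously undefined categories. Features are synthetic
# stand-ins, not real application data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Stand-ins for encoded application attributes (e.g., occupation, travel history):
X = rng.random((300, 5))

X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)

# Each cluster is a machine-found "category" that analysts must then
# interpret -- the model itself assigns it no meaning.
print(np.bincount(labels))
```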
What to watch
Motivation — Why was the ADM system developed?
In visa processes, the driving motivation for governments is to streamline processes and make them faster and more "efficient," for example by classifying or grouping certain applications automatically, to screen for various "risk" categories related to travel and immigration for national security purposes, or to find new insights or patterns that in turn can shape policy responses. Theoretically, this could also make decisions more consistent. Critics of visa practices often question the motivation of governments and argue that governments want to apply a veil of a seemingly "neutral" and technical ADM process to obscure (or encode) discriminatory practices that are biased against people of certain nationalities or ethnic groups. Via ADM, governments can codify a certain predisposition toward foreigners, migrants, or refugees into the visa process.
Action — What action does the ADM model trigger?
The "action" a model triggers can relate to different stages of the visa process: is an application simply grouped together with others (say, based on levels of complexity, or perhaps based on occupation when processing labor migration applications), or does it actually automate a decision on a visa? In the latter case, an automatic approval (as in the Canadian test case) is arguably different from an automatic rejection. If it involves some sort of automatic "flagging" at border crossings (or "triage at border sites"),28 the action of inviting someone to a second line of questioning is arguably different from outright denying entry (though here, too, concerns about codifying systematic discrimination apply).
Data — Which data sources were used?
Judging the effectiveness and procedural justice of ADM-supported visa issuance will hinge on questions regarding data: whether a supervised system is trained on past visa applications, as in the example from Canada (where it could simply reproduce the biases of past visa decisions); whether it pulls in data from multiple data sources, for example regional- or context-specific data; and, if new data is automatically incorporated as part of a machine-learning model, whether there is a way to control for bias or data quality when the model draws in data from visa applications managed by other countries. For example, in June 2020 the "Five Eyes" countries set up a new Secure Real Time Platform to share biometric and biographic information for immigration and border management purposes. One could imagine a machine-learning ADM system that begins drawing in dynamic data from such shared data platforms, including visa decisions of other countries.29 Finally, as in all AI and machine-learning contexts, the output of the ADM system will hinge upon how "messy" the training data is, for instance different phonetic spellings of the same name or the conflation of similar names.
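A tiny sketch of why spelling variation matters in practice: with naive string matching, two transliterations of the same name look like different people, while a loose fuzzy-matching threshold can conflate genuinely different people. The names below are examples only:

```python
# Sketch of the "messy names" problem: exact matching misses transliteration
# variants of the same name, while loose fuzzy matching conflates different
# names. Names are illustrative examples only.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Same person, two common transliterations:
print(similarity("Mohammed", "Muhammad"))   # 0.75 -- an exact-match join misses it
# Different people, similar spellings:
print(similarity("Amir", "Amira"))          # ~0.89 -- a loose threshold conflates them
```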
Another important question is whether governments will incorporate social media or other meta-data of individuals within their ADM models (as the U.S. Immigration and Customs Enforcement had proposed in its extreme vetting initiative). There is currently no public information about which data sets governments are using or testing in the area of risk assessment for visa purposes, which, for all the reasons outlined above, is extremely problematic. Public scrutiny of the data involved in these processes is essential for accountability.
Accuracy and Efficiency — How accurate are the predictions of the model and how do we evaluate efficiency?
Both use cases most relevant for visa processes—classification and risk indicators—will in essence work with predictive values or probabilities, and hence must be treated as such. A system classifying an applicant based on such scores is thus always one in which a percentage of people will be misclassified. Arguably, this would be more problematic if negative visa decisions were ever automated (to date no government has stated it intends to do so). In either case, it remains to be seen where this would clash with legal and other principles related to procedural fairness. Assessing improvements in visa processes would also need to include the baseline—what happened before any ADM components were introduced? A determination on accuracy could also include a check such as the one Canada has introduced: blind quality assurance on a percentage of cases. Finally, any claim about efficiency could change as the model is implemented: if decisions are open to litigation because of process problems or claims of discrimination, then the system is actually not very efficient.
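Because scores are probabilities, every choice of decision threshold is a trade-off between wrongly flagging and wrongly clearing applicants—misclassification never drops to zero, it only moves around. A synthetic illustration:

```python
# Sketch of the threshold trade-off behind any score-based classifier.
# All numbers are synthetic; no real risk model is represented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_risk = rng.random(n) < 0.05                                   # 5% truly "risky"
score = np.clip(true_risk * 0.4 + rng.normal(0.3, 0.15, n), 0, 1)  # imperfect score

for threshold in (0.4, 0.5, 0.6):
    flagged = score >= threshold
    false_positives = np.sum(flagged & ~true_risk)   # wrongly flagged
    false_negatives = np.sum(~flagged & true_risk)   # wrongly cleared
    print(f"threshold {threshold}: "
          f"{false_positives} wrongly flagged, {false_negatives} wrongly cleared")
```

Raising the threshold clears more legitimate travelers but also waves through more genuine risks; lowering it does the reverse. Where that line is drawn is a policy choice, not a technical one.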
Human-in-the-loop — How is a human involved in the decision-making process?
In the Canadian test case, a human is involved in checking the rules the algorithm suggests for making decisions. Another question is whether a human also checks for changes in context, such as country-specific changes, to monitor or adjust an algorithmic model—even where positive decisions are automated. An important question that will require further research is how using an ADM system will affect decision-makers themselves. For example, even if a human is supposed to make a final decision, say before rejecting a type of visa, the very presence of some sort of automated classification or risk score could significantly influence that decision (automation bias). Choosing whether the output variable for individual risk assessment should be a risk score or a classification scheme is a case in point and would most likely lead to individuals being assessed differently by human case officers. As the EC report notes: "a classifier is probably the most sensible approach, as training a regression model to predict some kind of score…instead might be seen as attempting to directly predict a risk level."30 Finally, even a human-in-the-loop cannot correct for potentially discriminatory or biased decisions of a model (see the motivation section above) if the human is simply there to perform a perfunctory check as part of a seemingly "neutral" decision-making system.31
Bias and Discrimination — Is the model checked for bias and/or could it perpetuate systemic discrimination?
There is significant risk of bias in the training dataset, as noted above (see the data section), as well as opportunity for bias to enter during implementation. Further unintentional discrimination could be caused by the use of certain data. Ann-Charlotte Nygard of the EU's Fundamental Rights Agency has pointed out two risks regarding ETIAS: "first, the use of data that could lead to the unintentional discrimination of certain groups, for instance if an applicant is from a particular ethnic group with a high in-migration risk; the second relates to a security risk assessed on the basis of past convictions in the country of origin. Some such earlier convictions could be considered unreasonable by Europeans, such as LGBT convictions in certain countries."32 As data is gathered from a world rife with systemic discrimination, as more and more digital data is collected on individuals, and as these different data sets may increasingly be shared, willingly or unwillingly, all data must be carefully analyzed for bias. It would further be important to carry out an impact assessment for potential discrimination (which would also concern data-sharing agreements between countries). The more data there is, the more difficult—and the more urgent—these checks will become.
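One elementary check that such an impact assessment could start from is comparing decision rates across groups; a common heuristic flags a ratio below 0.8 (the "80 percent rule" borrowed from U.S. employment law) for further scrutiny. A sketch on synthetic data:

```python
# Sketch of a simple disparate-impact check: compare decision rates across
# groups. Groups and numbers are entirely synthetic.
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=5000)
# Synthetic decisions with a built-in disparity against group B:
approved = np.where(group == "A",
                    rng.random(5000) < 0.60,
                    rng.random(5000) < 0.40)

rates = {g: approved[group == g].mean() for g in ("A", "B")}
print("approval rates:", rates)
print("disparate impact ratio:", rates["B"] / rates["A"])  # < 0.8 warrants scrutiny
```

Passing such a check does not prove a system is fair—it is a coarse screen, not a substitute for a full impact assessment.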
Governance — What are options for recourse, oversight, and monitoring?
Governance questions in visa processes will be crucial for democratic societies, as many governments are likely to be recalcitrant about fully disclosing how certain risk assessments or classifications on visa cases are made. There may be good reasons for this reluctance (for example, ADM systems themselves can be vulnerable to exploitation by criminal groups if a way to circumvent an automatic system is found), but options for individuals to obtain information on the basis of a given decision (Why was my visa denied? Why was I stopped at the airport? Was there an algorithm involved in the decision? Where do I turn with these questions?) are important components of an ADM system. Further, different affected groups may end up having differing access to recourse, reflecting existing inequalities (say, related to wealth, if only wealthy people can afford legal support to contest decisions) or simply the informational inequity between the issuing government and the public. Precisely because of the lack of public transparency and the national security cloak that is part of visa processing, independent oversight and monitoring bodies, as well as impact and monitoring mechanisms to screen for potential harm and negative or discriminatory impacts, will be crucial if governments and public authorities want to maintain trust in the functioning of these processes and systems.
B. Matching for integration
A second example of the use of algorithmic systems in migration management is that of systems that match refugees arriving via resettlement, or asylum seekers, with "optimal placement,"33 that is, locations or communities in which they are most likely to find employment, with the main aim to "improve integration" of refugees through "algorithmic assignments."34 When refugees and asylum seekers arrive in a host country, several factors can greatly impact their well-being as well as their integration35 and inclusion trajectories: for example, whether adequate housing is available, whether they can find employment and make use of education opportunities, and whether they can rely on existing support networks and are welcomed in a non-discriminatory environment.
Normally, resettlement refugees or asylum seekers are distributed "manually" by administrators or reception staff to specific geographic locations, often based either on the existing capacity of the community to host or on predefined allocation rules, say to individual states.36 Usually, there are special considerations or exceptions to these rules for family reunification. Assignment can be a very time-consuming process in what is already a field with often limited staff capacity, depending on the host-country situation and distribution system used. It is here that machine learning can be part of software packages (alongside mathematical integer optimization systems)37 that could, ideally, make better placement suggestions, enabling individuals to thrive in a more welcoming environment with greater options. This can free caseworkers to attend to other tasks or placements and benefit host communities through better integration outcomes, which ultimately reduces what communities must spend on integration measures.
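In schematic terms, such packages combine two steps: a model predicts, for each person and each location, an outcome such as employment probability, and an optimization routine then assigns people to locations (subject to capacity limits) to maximize the predicted total. The sketch below shows only the optimization step with synthetic probabilities; it is not the Annie MOORE or GeoMatch code:

```python
# Sketch of the optimization step behind placement matching: given predicted
# employment probabilities per (person, location) pair and location capacities,
# pick the assignment maximizing the predicted total. Numbers are synthetic;
# this is not the Annie MOORE or GeoMatch implementation.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_people, capacities = 6, [2, 2, 2]           # 3 locations, 2 slots each

# Predicted employment probability for each person at each location
# (in practice, the output of a model trained on past outcomes):
prob = rng.random((n_people, len(capacities)))

# Expand each location into one column per slot, then solve the
# assignment problem (maximize total predicted probability):
cols = np.repeat(np.arange(len(capacities)), capacities)
cost = -prob[:, cols]                          # negate: the solver minimizes
rows, slots = linear_sum_assignment(cost)

for person, slot in zip(rows, slots):
    print(f"person {person} -> location {cols[slot]} "
          f"(predicted employment prob. {prob[person, cols[slot]]:.2f})")
```

Note what the objective function encodes: it maximizes predicted employment and nothing else—the design choice critics point to below.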
There are currently two main use cases in trial phase—the Annie MOORE system employed in the United States and GeoMatch by the Immigration Policy Lab, tested both in Switzerland and the United States. As of May 2021, several German states have also just started a three-year exploration phase38 and other countries are interested in adapting such models to their own processes.
Critics of these matching projects point to the fact that the projects so far optimize only for employment outcomes when defining "optimal" locations, not taking into consideration other factors, including the preferences of refugees or asylum seekers themselves (though it must be noted that this is currently not the case in non-ADM systems either).39 Another concern raised is whether such systems could in turn exacerbate inequalities between refugees, if those whose profiles are less promising for employment are matched to locations that might not be well-off, potentially "perpetuating cycles of poverty."40 Finally, by establishing the current approaches and models, they potentially create path dependencies that will be (politically) difficult to take back or adjust. For example, why should refugee preferences even be considered going forward, if existing matching algorithms that focus on one-sided matching and employment "work" for resettlement agencies and governments alike?
The developers of current matching algorithms have addressed some of these concerns by pointing to both technical and procedural logic: for example, while including preferences in a matching model would be “theoretically appealing”, a lack of systematic data on refugee preferences currently prevents such a two-sided matching approach and would also require “extensive political coordination.”41 Resettlement agencies in the United States, for their part, are tasked to maximize the number of refugees who are employed within 90 days of arrival42—which could be another reason software would focus on improving employment outcomes, not least to keep government and ultimately societal approval for resettlement programs. As these examples show, even seemingly straightforward matching algorithms are embedded in a socio-technological framework and decisions regarding their use and potential implications must include grappling with these difficult questions.
Case Study
Annie MOORE and GeoMatch
The Annie MOORE (Matching and Outcome Optimization for Refugee Empowerment) system is named after the first immigrant to come through Ellis Island in the United States. It was started in 2018 as a collaboration between academia, the non-profit resettlement agency HIAS, and the U.S. Department of State. It recommends locations where newly arrived resettlement refugees are most likely to find employment based on their profile. Additional optimization models help to further refine those locations to see where newcomers can find suitable childcare support or language assistance if needed. The system runs on open-source software, and the developers maintain it can be updated, adjusted, and replicated in other contexts.
The GeoMatch algorithm, developed by the Immigration Policy Lab and tested since 2020 in the United States and Switzerland, works similarly. In the Swiss case, asylum seekers had previously been assigned randomly to cantons. The model was backtested on past data involving the personal characteristics of asylum seekers (such as work history and education) and their placements to cantons. GeoMatch predicts the likelihood of finding employment at various locations in the host country and recommends the "optimal" location for the newcomer. A human caseworker can confirm or change this recommendation. GeoMatch in Switzerland is jointly coordinated between the Immigration Policy Lab, stakeholders in the placement process, and the Swiss State Secretariat for Migration (SEM).
What to watch
Motivation — Why was the ADM system developed?
The developers of both matching models have described their approach as "refugee-centered" or "human-centered," with the ultimate aim of helping newcomers and receiving communities thrive through better refugee employment outcomes, rather than relying on the capacity of the location or a set distribution key. They also seek to make decisions with many variables faster, thus freeing up staff or other resources for governments or placement agencies. Currently being discussed is if and how such matching algorithms could be extended to integration categories other than employment, and to user categories other than resettlement refugees or asylum seekers, such as economic migrants.43 Further machine-learning research could also compare the locations where refugees and asylum seekers thrive to detect patterns that might not have been on the human radar screen. However, there may be unintended consequences, as described above: a political decision (not to take the preferences of refugees or asylum seekers into account when geographically placing them in communities) is then codified into a seemingly neutral algorithm, making it potentially harder to change the system in the future.
Action — What action does the ADM model trigger?
The models recommend resettlement locations to administrative staff, based on where refugees or asylum seekers are predicted to most likely find employment.
Data — Which data sources were used?
Both models were trained on anonymized data of past resettlement refugee or asylum seeker profiles, provided either by a resettlement agency or, in the case of Switzerland, the SEM. For its model, Annie MOORE used 2010–2017 employment outcome data of refugees resettled in the United States, measured 90 days after their arrival. It only used data on refugees with no prior family ties in the United States, as newcomers with prior family ties are usually sent to the location of their family. With both models, the machine learning detects patterns in the anonymized refugee profiles to see what aspects make a person likely to obtain employment in a given community. The developers of both models point out that other variables—e.g., longer-term employment figures, education, and household earnings—could be included if they become available systematically and in sufficient volume. So far, neither model builds in data on the location preferences of refugees themselves (e.g., whether they prefer warmer or colder regions, urban or rural settings, etc.), as these are not collected systematically.
Accuracy and Efficiency — How well do these ADM systems support human decision-making?
Developers and users of both models attest to increased efficiency and better employment prospects. However, to the best of our knowledge, there has been no independent testing or checking of these claims. GeoMatch's code is public, and the data can be requested from the SEM for research purposes. Annie MOORE users state it arrives at a result/preference for an optimal location six times faster than a human.44 Karen Monken, director for pre-arrival and initial resettlement at HIAS, states that "the effectiveness of my operations has increased dramatically. I now spend 80 percent less time on routine matching and can focus my time and energy on the more difficult cases such as those with significant medical conditions, ensuring that their placement is as good as possible."45 At the same time, employment chances are said to be 30 percent higher than with manual placement.46 According to the developers of GeoMatch, the algorithm produced results that improved the employment prospects of refugees in the United States by about 40 percent and in Switzerland by 75 percent.47 The Swiss model includes an evaluation and a double-blind randomized control trial, which will provide further results.
Human-in-the-loop — How is a human involved in the decision-making process?
Caseworkers have the final say about the placement of a refugee or asylum seeker. As in other ADM systems, "decision fatigue" and "automation bias"—whereby humans more or less blindly rely on the recommendation of the algorithmic model—have to be taken into consideration.
Bias and Discrimination — Is the model checked for bias and/or could it perpetuate systemic discrimination?
There is no public information available on whether the training of these models involved checking for bias, nor whether the models were tested for potentially discriminatory results.
Governance — What are options for recourse, oversight, and monitoring?
A human caseworker makes the final decision. Hence, technically speaking, recourse can be directed at their agencies if the resettlement or asylum distribution system in the country allows for it. However, general questions remain: Were the subjects whose data was used for the training model informed or asked for permission, even if their data was anonymized? Were the refugees or asylum seekers to be placed informed about the use of an algorithmic support system for a placement suggestion? And how important is that, considering that the final placement decision is made by a human caseworker—just as it would be in a fully manual process without algorithms, in which the caseworker also does not explain a priori how she arrived at the decision?
C. Assisting planning and preparedness with early-warning and forecasting
Several new AI-powered models and systems are currently being developed and tested by government agencies or humanitarian actors in order to better anticipate, prepare, and plan for various aspects of human mobility, displacement, or migration management.48 Depending on the model, the full power of current machine learning-based data analytics is used to combine and analyze vast amounts of data, ranging from datasets on conflict to weather patterns, administrative data, and georeferenced data points, including data from UN agencies, the Internal Displacement Monitoring Center (IDMC), or the World Bank, to name just a few. Attempting to better predict human movement across borders is not in itself new—forecasting models are traditionally based on quantitative and statistical methods—but the hope is that the sheer volume and variety of available data, combined with data analytics, can open up new or more accurate predictions. Ideally, the output of these models can then be used to assist allocation decisions related to administrative or financial resources, and the formulation of specific policy options. It could help governments or other actors better prepare and plan financial decisions, and focus diplomatic efforts or human resources. This could serve both humanitarian and border enforcement purposes.
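In its simplest form, such a model learns the relationship between lagged indicators (conflict events, prices, weather) and subsequent displacement, then projects forward. The sketch below uses entirely synthetic data and a generic regressor; it does not reproduce Foresight, PREVIEW, or any EASO model:

```python
# Sketch of a lagged-indicator forecasting setup. All data is synthetic;
# no real model (Foresight, PREVIEW, EASO) is reproduced here.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
months = 120
conflict = rng.poisson(5, months).astype(float)      # stand-in: conflict events
prices = 100 + np.cumsum(rng.normal(0, 1, months))   # stand-in: food prices
displaced = (1000 + 80 * np.roll(conflict, 1)        # synthetic ground truth
             + 5 * prices + rng.normal(0, 50, months))

# Features at month t are the indicators at t-1 (a forecast can only use the past):
X = np.column_stack([conflict[:-1], prices[:-1]])
y = displaced[1:]

model = GradientBoostingRegressor().fit(X[:100], y[:100])
pred = model.predict(X[100:])
print("mean absolute error on held-out months:", np.abs(pred - y[100:]).mean())
```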
The scope of these models can vary: they can be quite specific, such as attempting to predict the number of asylum applications, or they can form part of bigger risk-assessment or early crisis-warning systems, such as the PREVIEW project being developed by the German Federal Foreign Office.49 In 2020, the European Commission commissioned a study to "assess the feasibility of developing an AI-based tool for migration forecasting for the Commission and EU Agencies," focusing on models that forecast irregular migratory movements at the EU's borders up to three months in advance.
The European Asylum Support Office (EASO) has developed an early-warning and forecasting system “that allows for the prediction of migration movements for a period of up to three weeks in advance, taking into account internet searches and global media at the countries of origin.”50 These models and systems are also increasingly informing the work of international organizations or humanitarian agencies, such as the Danish Refugee Council.
Those concerned with governments or other actors using these types of models often question the motivation behind developing them, or point to the danger of models used in one context, say in the humanitarian sector, being co-opted and used for other purposes, say in border enforcement. A further concern is the danger that, as in other cases, predictive analytics tools are simply used to cloak an inherently political agenda, such as restrictive immigration policies, under a seemingly neutral technological tool. Finally, where the development of these tools is combined with data collection on people on the move, there are concerns about human rights violations or issues related to the use of certain types of data (for example, if it were to inadvertently reveal the location of certain groups or collect sensitive information that can be tied back to individuals).51
Importantly, the evaluation of ADM systems in the early-warning and forecasting fields depends entirely on the context in which they are employed and the purpose they are supposed to serve. They can vary in how directly a human decision is affected (does X lead to Y automatically? Or are the outputs of a tool embedded in a much more indirect way in reports or decision-making processes?). The policy responses based on any system could equally diverge fundamentally: one warning could lead to sending emergency aid, or to closing borders, or to both.
The more governments and other actors employ these models to inform their decisions—in budgetary processes, in responses to emergency situations, in border management, or in order to employ proactive measures (as opposed to reactive policymaking) such as crisis prevention or mitigation—the more important these socio-technological implications will become.
Case Study
Foresight Software by the Danish Refugee Council
The Danish Refugee Council (DRC), together with IBM Research, has developed forecasting software called Foresight, aimed at "supporting humanitarian planners and decision-making practices around allocating resources, understanding causality of events, and informing humanitarian aid efforts on the field for 'forcibly displaced peoples'."52 The machine-learning-based model analyzes historical data from over 120 sources and the effects of different variables, for example political or economic situations or crisis- or climate-related data. The timeframe the model seeks to analyze is anywhere from one to three years. In developing the model, the team based the use cases on scenario work that involved interviews with humanitarian aid planners. The system is used to predict forced displacement in different regions.53 It was a conscious decision not to include as an output whether this displacement will be internal or across borders (that is, it does not predict where people will move to).
It is further explicitly designed to assist with three “decision points” that humanitarian aid workers are often faced with:
- “surfacing causality” of events after they occur to better anticipate future events. This is done by looking at past data to see, for example, if new data reveals potential reasons for why people left their homes;
- creating a common analysis space, which allows users to share analyses and to upload their own local or agency-specific data; this addresses the data silos humanitarian actors often find themselves in, where data is not shared between agencies for data protection or technical reasons; and
- "balancing risky situations," which refers to assessing regions for deploying aid workers in the field, or proactively mitigating the forecast displacement.
The target users of the software are humanitarian workers, who are able to interact with and make adjustments to the AI system, such as by feeding context-specific analysis into the model or manually changing different indicators to make scenario-based forecasts, as sketched below.
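That scenario interaction boils down to re-running a fitted model on manually perturbed inputs and comparing the forecasts. A minimal illustration with a synthetic stand-in model (not the Foresight system):

```python
# Sketch of scenario-based forecasting: perturb one input indicator by hand
# and compare the model's outputs. Model and numbers are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.random((200, 3))                       # stand-ins: conflict, economy, climate
y = 500 + 300 * X[:, 0] + rng.normal(0, 20, 200)
model = LinearRegression().fit(X, y)

baseline = np.array([[0.4, 0.5, 0.5]])
scenario = baseline.copy()
scenario[0, 0] *= 1.5                           # "what if conflict intensifies 50%?"

print("baseline forecast:", model.predict(baseline)[0])
print("scenario forecast:", model.predict(scenario)[0])
```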
What to watch
Motivation — Why was the ADM system developed?
Forecasting models related to human mobility or displacement can have many different goals. Typically, they are presented as increasing the efficiency of resource allocation. Ideally, they serve the purpose of informing policy choices with better evidence, adding another component to the decision-making process related to migration policy. Charlotte Slente, general secretary of the DRC, has stated that "the tool helps us predict more quickly what will happen so that we can plan better and intervene earlier in a humanitarian crisis." Rapid intervention, she adds, often saves money in the long run.54 These systems could, of course, be motivated by other factors, including more sinister ones, and thus the motivation behind a given model is a key socio-technological component when evaluating its development or use.
Action — What action does the ADM model trigger?
Which action forecasting models could trigger or help trigger is directly related to the socio-political context and the motivation of decision-makers. As mentioned, the same model assessing displacement across borders in a certain region could be used for two diametrically opposed purposes, or both: for example, to prepare humanitarian responses or to close a country's borders. It could also involve reactive responses or a move to proactive ones such as crisis-prevention measures. It could also have a very immediate and direct (perhaps almost automated) impact on the allocation of financial and human resources. A use case illustrated in the report for the Commission, for instance, envisages using machine learning to predict passenger flows in order to adjust the staffing of border guards.55 There are also risks of unintended consequences of actions stemming from a given model: for example, mobility patterns of certain groups could be misused by political opponents or authoritarian regimes.
Data — Which data sources were used?
Even with good data, migration is complex and manifold (from forced displacement or asylum to family reunification, etc.) and pertains to the aspirations of individuals. Furthermore, there is limited data to train and work with when using forecasting models to assist with decisions: there is often not enough local or timely data, and data may be missing, unreliable, or not shared, say, between different humanitarian agencies. Further data inconsistencies lead to higher error rates.56 There are also ethical and legal considerations regarding the data sources for these models. For example, the German PREVIEW project only uses publicly available data, and an EASO model that involved social media data was stopped due to legal considerations.57
Accuracy and Efficiency — How well do these ADM systems support human decision-making?
In two test cases concerning Myanmar and Afghanistan, the algorithm in the DRC Foresight model came up with rather precise forecasts, with an error range of 8 to 10 percent, according to the DRC itself. As of September 2021, the model has been applied to 24 countries with an average margin of error of 21 percent.58 However, these models are not good at predicting sudden changes or crises, or "black swan" events—low-probability events that have a very high impact on migration.59 Algorithm-based systems often cannot screen for important contextual information: as part of the EASO forecasting tools being developed, for example, an increase in searches in Tunisia for the term "Italy"—which could have given some indication of migration intentions—turned out to correlate perfectly with Italian football league matches. In general, people working with these systems agree that the models may be useful for short-term predictions but not for longer-term ones.
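Figures like the quoted 8 to 10 percent or 21 percent margins are typically produced by backtesting: comparing earlier forecasts against what was later observed. A minimal sketch using mean absolute percentage error (MAPE) on made-up numbers—we do not know the DRC's exact metric:

```python
# Sketch of how a "margin of error" can be computed when backtesting a
# forecast: mean absolute percentage error (MAPE) against observed outcomes.
# The figures below are made up for illustration.
import numpy as np

actual   = np.array([12000, 8500, 15000, 9800])   # observed displacement
forecast = np.array([10900, 9400, 13200, 11300])  # the model's earlier forecasts

mape = np.mean(np.abs(forecast - actual) / actual) * 100
print(f"mean absolute percentage error: {mape:.1f}%")
```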
Human-in-the-loop — How is a human involved in the decision-making process?
The DRC model envisages end users being trained on the software before using it, so they can interpret its results, interact with it, or combine it with their own data sources. In designing the software, engineers started with qualitative interviews to choose the decisions with which the model could assist. Beyond involving end users in the development of the model itself, the results of early-warning systems require "experts in the loop"—people who can interpret the results of a given model for migration decision-makers and policymaking.
Bias and Discrimination — Is the model checked for bias and/or could it perpetuate systemic discrimination?
In the case of early-warning systems, this would entirely depend on which data sources were used and toward which end. The action and policy choices a given system leads to should systematically be checked and pre-assessed to avoid potentially discriminatory results against certain groups or individuals.
Governance — What are options for recourse, oversight, and monitoring?
Questions of oversight and monitoring in developing complex early-warning or forecasting systems include how the system was first developed, whether impact assessments are part of the process, and, if unsupervised machine learning is involved, whether there are mechanisms to better understand its decisions. Better oversight would also mean ensuring that end users are trained on a model and that the people checking an algorithm's outcomes and machine learning are familiar with the context in which it is employed. Finally, it involves checking the concrete impact that the resulting action has in implementation.
4. Next steps for migration policy actors: Catching up … and how to get there
As the issues raised in these use cases demonstrate, there are considerations and implications that go beyond the merely technical when trying to understand and evaluate the use of ADM systems in migration policy. Instead, the socio-technological framework, including the fundamental assumptions and values that guide our current migration and human mobility system as a whole, is a necessary part of any such evaluation. As in other areas, the development and implementation of ADM systems has far outpaced the ability of policymakers, regulators, and the public at large to keep up. The context in which ADM systems will be employed in the migration and refugee space will thus ultimately hinge on the development of new technologies themselves, broader public discourse regarding their use, and emerging regulations. It will be important for migration policy actors to connect better and more directly with these emerging developments in the tech and digital rights space.
Catching up…
Bridge migration policy with developments in digital rights and tech regulation
As part of these developments, an important and active civil society space on digital rights has emerged in Europe and the United States, contributing to a growing awareness of the massive rollout of ADM systems in the public sphere and fostering a more detailed debate on their potential impact, along with calls and ideas for governance and regulation. Examples include increasing calls for moratoria or outright bans on certain technologies, like facial recognition in public spaces. The UN Office of the High Commissioner for Human Rights, meanwhile, has recently issued a new report pointing to potential threats to human rights.60 Some governments are setting their own frameworks to guide their use and development of such systems, such as Canada’s Directive on Automated Decision-Making, which includes an Algorithmic Impact Assessment tool.
A new review board at Stanford was recently set up to test model risks during the development process (as opposed to after the fact), and new institutes are being created to explore how AI can be used to monitor and improve AI-based ADM systems.61 The EU Fundamental Rights Agency has also called for greater use of this approach.62 In essence, raising awareness of any potential or actual bias and discrimination created by these systems has ultimately brought to light underlying issues that were already present—a dynamic that may be replicated in the migration policy field.
On the regulatory side, the employment of ADM systems in migration policy—at least in Europe—will depend in large part on the final versions of the EU Artificial Intelligence Act, the EU Data Governance Act, and the EU Digital Services Act, and is already subject to the GDPR. The first draft of the EU AI Act, for example, takes a risk-based regulatory approach to AI systems, outlines four categories of varying degrees of risk (unacceptable, high, limited, and minimal risk), and clearly places migration, asylum, and border control management in the high-risk category. This set of tools includes those to assess the risk (such as security risk, irregular immigration risk, or health risk) of a person who intends to enter an EU member state’s territory, tools for the verification of travel documents, and tools to examine applications for asylum, visas, and residence permits.63 The most important regulations placed on high-risk AI systems are outlined in Title III, Chapter 2, Articles 9-15, and include clear guidelines that are relevant for migration use cases, too, including high-quality datasets to minimize risks and discriminatory outcomes, human oversight, and clear and adequate information to users.64 What this means for the different areas of migration management will need to be spelled out and may differ depending on the type of ADM system.
Adapt emerging policy tools for ADM to the migration space
A number of recent initiatives offer good entry points for migration policymakers when it comes to the use of ADM systems specifically, by outlining key governance tools or categories for further development (see text box below). As the three use cases illustrate, these categories can offer valuable starting points to adapt to the migration space. For example, impact assessment tools should be required for all three: visa cases, matching procedures that assist decisions in resource allocation, and political planning. There should also be a discussion of whether the use of ADM systems should be entirely banned for certain cases, or whether only a certain type of model is acceptable. Black box models for decisions on visas or legal status determinations of persons, for instance, should be prohibited. This discussion must also address how to balance the highly opaque and discretionary nature of migration policy decisions with the need for transparency, monitoring, and accountability processes at certain points in the system, both to maintain trust and to avoid grave harm. This would necessarily include some type of independent and external oversight or certification body.
Recent Calls for Regulation
Algorithm Watch has called for the following steps regarding ADM systems:
- Increase the transparency of ADM systems (public registers and data access frameworks).
- Create a meaningful accountability framework for ADM systems (audits, a civil society watchdog, and banning facial recognition).
- Enhance algorithmic literacy and strengthen public debate on ADM (through centers of expertise, and an inclusive and diverse debate on ADM systems).65
The recent Algorithmic Accountability for the Public Sector report, by the Ada Lovelace Institute, AI Now Institute, and the Open Government Partnership, has synthesized different categories of tools for public accountability policies along the following lines:
- Principles and guidelines
- Prohibitions and moratoria
- Public transparency
- Impact assessments
- Audits and regulatory inspection
- External/independent oversight bodies
- Rights to hearing and appeal
- Procurement conditions66
Finally, there is a whole psychological dimension related to how ADM systems affect decision-makers. This requires more research and special attention, with issues such as “decision fatigue” and “automation bias” essential to monitor, since humans tend to favor the decisions of machines even when they may be faulty.67 It also requires at least a basic level of awareness, among the people adopting the systems, of the many ways in which bias and discrimination can not only be reproduced but multiplied and then obscured by the data and its analysis.
In addition to the government directives and new regulations above, a next step would be to adapt these policy tools to different areas of migration and refugee policy more specifically.
…and how to get there
Create new spaces for exchange by trusted actors
Bridging the current disconnect between migration and refugee policymaking, on the one hand, and new technologies including ADM systems, on the other, requires a more deliberate exchange between migration policymakers, tech regulators, technologists, and civil society. This could include official processes, for example intragovernmental exchanges, to standardize and monitor good practices, as well as more informal, trust-building initiatives or track-two processes that can accompany new legislation or technological applications as they emerge.
Given that migration policy per se always has an international dimension and often involves foreign policy considerations, new spaces for exchange should also be created internationally. Many of these technologies will be employed in countries without legislation such as is currently emerging in places like the EU. It will thus be necessary to find ways for decision-makers, civil society, and other actors to discuss the potential impact of different AI and data jurisdictions and data-sharing arrangements in order to actively shape a humane governance of human mobility across borders going forward. This should also include dedicated funding for universities to research these links and for existing digital rights or public policy institutes to extend their work to the migration policy field.
Include discussion on the use of ADM systems in international migration fora
There are a number of international migration fora that should systematically include the use of ADM systems, and the use of new technologies more broadly, as thematic areas of focus. This could be done via the UN Network on Migration, as well as through the emerging implementation and regional processes related to the Global Compact on Migration, the Global Compact on Refugees, or the Global Forum for Migration and Development.
Increase the number of technologists or bilinguals working in migration policy
Developments at the intersection of migration and technology, and the implementation of ADM systems as outlined in this paper, require people who understand both the tech and migration policy spaces. To increase the linkages and exchanges called for above, migration policy institutes, public administrations, think tanks, and international organizations should make a conscious effort to employ more technologists and to employ or develop more in-house “bilinguals.” Conversely, public policy and investigative institutes, as well as digital rights initiatives currently working on tech governance, regulation, or digital rights, could be further developed with a specific migration and refugee policy focus.
Link tech and migration policy to bigger questions of foreign policy and geopolitics
Finally, migration and refugee policy in today’s world is highly political and inextricably linked to foreign relations and policy considerations between states, regions, and even continents. As technologies in the migration space are employed more frequently, it is important to consider how their use intersects with questions of development policy and governments’ “digital foreign policies,” and how it sits within the broader geopolitical shifts of our time. AI standard setting, in this regard, is far more than a technological exercise; it is an active shaping of a new geopolitical world order. Shaping technological standards for human mobility, and the values that underpin them, is thus equally urgent at a geopolitical level.68
5. Conclusion
Algorithmic decision-making systems in the migration space are bound to increase, while meaningful oversight and understanding of their long-term effects on the migration world are not yet in place. Future developments in the use of technology must also be kept in mind. As new technologies and components are tested, questions may emerge, such as the use of synthetic data sets for training purposes69 or the continued hype around emotion-recognition technology, whose accuracy and implementation have been increasingly discredited.70 We also do not know how machine-learning-based results produced today may in turn be used to make decisions in the future (say, when creating indicators). What are we laying the groundwork for, and what are the long-term consequences of the systems we are building? This, too, requires a systematic and conscientious accompaniment of such new technologies by migration policy actors.
Ideally, ADM systems can make many aspects of managing migration in an ever-faster-changing world more efficient and responsive to issues such as displacement in humanitarian settings, and contribute to secure, safe, and orderly migration that could ultimately benefit many individuals. They could also lay the groundwork for a human mobility system in which surveillance is a self-fulfilling prophecy and existing inequalities and discriminatory practices are reproduced on a mass scale. Both scenarios, and most likely a messy hybrid of the two, are possible. Navigating these developments in migration, using the potential they offer while reining in the great risks they can pose, requires designing these ADM systems within the context of their broader socio-technological implications and the values on which we want them to be based.
The views expressed in this publication are the views of the authors alone and do not necessarily reflect those of the partner institutions.
Appendix
Footnotes
1 Algorithm Watch, Automating Society—Taking Stock of Automated Decision-Making in the EU, 2019.
2 The agency would later, in May 2018, back down from developing this technology, but not before it had searched for industry contractors that could design such a system. As a response to the proposal, 54 known “experts in the use of machine learning, data mining and other advanced techniques for automated decision-making” issued a letter of “grave concern,” stating that “(s)imply put, no computational methods can provide reliable or objective assessments of the traits that [the U.S. Immigration and Customs Enforcement] seeks to measure” and that “in all likelihood, the proposed system would be inaccurate and biased.” It concluded that the approach was “neither appropriate nor feasible.”
3 The iBorderCtrl project was mostly criticized for testing what was called an Automated Deception Detection System, an AI-based system/avatar that tried to detect whether a (test) person at a border check was telling the truth by observing non-verbal behavior. For more information, see iBorderCtrl? No! | iBorderCtrl.no and CORDIS Horizon2020 iBorderCtrl Research Results.
4 Kate Crawford, The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, 2021, p.9.
5 Supervised machine learning approaches use data sets that have been pre-labeled by humans; unsupervised machine learning approaches use unlabeled data with the aim of detecting patterns and generating new insights or user profiles (e.g., customers who bought article x also bought article y).
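As a purely illustrative contrast between the two approaches, here is a toy sketch using scikit-learn on synthetic data (not any system discussed in this paper):

```python
# Toy contrast between supervised and unsupervised learning, using
# scikit-learn on synthetic data. All data is generated, not real.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Supervised: the model learns from human-provided labels y.
clf = LogisticRegression().fit(X, y)
print("supervised predictions:", clf.predict(X[:3]))

# Unsupervised: no labels; the model groups similar records on its own
# (e.g., discovering "customers who bought x also bought y" segments).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("unsupervised cluster assignments:", km.labels_[:3])
```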
6 ADM is mostly used to support decision-making in prioritization, classification, association, and filtering; these processes can also build on each other. For more information see: Algorithmic Power · Algorithmic Accountability (gitbooks.io).
7 Data poisoning or model poisoning is an attack on a machine learning model’s training data that aims to alter the model’s prediction behavior. For examples see: Ben Dickson, What is machine learning data poisoning?, TechTalks, 7 October 2020.
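A toy sketch of the label-flipping variant of such an attack, using scikit-learn on synthetic data (all figures invented; real attacks can be far subtler):

```python
# Minimal illustration of label-flipping data poisoning: corrupting a
# share of the training labels can degrade the learned model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
poisoned = y_tr.copy()
flip = rng.choice(len(poisoned), size=len(poisoned) // 4, replace=False)
poisoned[flip] = 1 - poisoned[flip]  # attacker flips 25% of labels

bad = LogisticRegression(max_iter=1000).fit(X_tr, poisoned)
print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", bad.score(X_te, y_te))
```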
8 See for example: Reuters, “Amazon scraps secret AI recruiting tool that showed bias against women”, October 11, 2018; Alex Najibi, “Racial Discrimination in Face Recognition Technology”, Blog Post, Harvard University, accessed September 28, 2021.
9 Algorithm Watch, Automating Society, 2019.
10 Ibid.; Algorithm Watch, Automating Society Report 2020, October 2020.
11 There might be good reasons to use ADM, but just because a technology exists, it might not be the best tool for the problem to be solved. See, for example, the police department in Germany that decided against using machine-learning-based systems for predictive policing because other rules-based tools provided similar results.
12 Max Roser, Tourism, Our World in Data, 2017.
13 Deloitte & European Commission, Opportunities and Challenges for the Use of Artificial Intelligence in Border Control, Migration and Security, Volume 2, Addendum, May 2020, p. 18.
14 Canadian Border Services Agency, Digital transformation. Benefits, risks and guidelines for the responsible use of emergent technologies, Strategic Policy and Planning, March 2018.
15 Petra Molnar and Lex Gill, Bots at The Gate: Automated Decision-Making in Canada’s Immigration and Refugee System, 2018.
16 For a discussion on the politicization of data analytics tools, see Petra Molnar, EDRi and the Refugee Law Lab, Technological Testing Grounds- Migration Management Experiments and Reflections from the Ground Up, 2020
17 Lucia Nalbandian, Using Machine-Learning to Triage Canada’s Temporary Resident Visa Applications, Ryerson Centre for Immigration and Settlement (RCIS) and the CERC in Migration and Integration, July 2021; see also Immigration, Refugees and Citizenship Canada, Digital transparency: Automation and advanced data analytics, accessed Sept. 28, 2021.
18 Nalbandian, Using Machine Learning, 2021, p.8.
19 Authors’ exchange with IRCC in September 2021; see also EU Conference Presentation, Digital Transformation, Immigration, Refugees and Citizenship Canada (IRCC), 28 October 2020.
20 Jasmine Andersson, “Home Office to scrap algorithm which secretly assigns ‘risk score’ to some nationalities by design“, The i, 4 August 2020.
21 Natasha Lomas, “UK commits to redesign visa streaming algorithm after challenge to “racist” tool”, TechCrunch, 4 August 2020.
22 Deloitte & European Commission, Opportunities and Challenges, 2020, p.48.
23 Ibid., p. 54.
24 Visa-free non-EU nationals will have to apply to enter the Schengen area through the ETIAS system. The ETIAS Fundamental Rights Guidance Board (see Article 10 of the ETIAS Regulation) is responsible for assessing potential impact on fundamental rights in the application process ensuring that applications have been processed fairly, efficiently and securely. The independent advisory board passes on its recommendations to the ETIAS Screening Board.
25 Here, one use case envisions a risk assessment to be formed by “creating and flagging criteria for which an individual can appear “risky”, e.g. based on the individuals data, based on criteria relevant to an ongoing scenario or threat (e.g. subject matter experts expect an increased number of Venezuelan citizens migrating to Europe illegally due to the instabilities in the country), or developed from reviewing profiles (e.g. historical data indicates that single males around the age of 27 without any property in their home country are more likely to overstay).” Cited in Deloitte & European Commission, Opportunities and Challenges, 2020, p.54.
26 Ibid., p.45.
27 Ibid., p.46.
28 Ibid., p.105.
29 “M5 Members do not automatically share facial or biographical data like names or personal details unless they match a fingerprint. When one member asks for information from another, the receiving country destroys the fingerprint after the match request.” Quoted in Gill Bonnett, “How the Five Eyes countries share immigration data”, Radio New Zealand, 30 December 2020.
30 Deloitte & European Commission, Opportunities and Challenges, 2020, p. 56.
31 Petra Molnar and Lex Gill, Bots at the Gate, 2018.
32 Quoted in: Algorithm Watch, Automating Society, 2020, p.27.
33 Narges Ahani et al.,”Placement Optimization in Refugee Resettlement,” Operations Research, 2021.
34 Kirk Bansak et al., Improving refugee integration through data-driven algorithmic assignment, Science, 19 January 2018. Designers of those algorithmic matching systems use descriptions such as “optimal” location or “improving integration” through employment, often without a contextual analysis of why they deem this approach optimal or how they would define integration.
35 We are aware that integration is in itself a loaded term and concept and that integration is a multidimensional two-way process that cannot be captured in one or two data points. For more information see for example, the course of Migration Matters, Rethinking ‘Us’ & ‘Them’: Integration and Diversity in Europe.
36 Germany, for instance, as part of its distribution strategy uses the “Königstein Key”, an annually calculated distribution quota that assigns asylum seekers to the federal states. The quota is based on the population numbers and financial situation (tax revenue) of the federal states.
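The commonly cited weighting combines two-thirds tax-revenue share with one-third population share; since the official key is recalculated annually, the weights and figures below should be read as an illustrative assumption, not the current key:

```python
# Sketch of the commonly cited Königstein Key weighting: two-thirds tax
# revenue share plus one-third population share. The official key is
# recalculated annually; the figures below are invented for illustration.

def koenigstein_quota(tax_share, population_share):
    return (2 / 3) * tax_share + (1 / 3) * population_share

# Hypothetical federal state with 13% of tax revenue and 15% of population:
print(f"{koenigstein_quota(0.13, 0.15):.1%} of asylum seekers assigned")
```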
37 Integer optimization refers to a method of mathematical optimization that restricts some or all variables to integer values.
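A minimal sketch of what such a model can look like for placement matching, using the open-source PuLP solver; the families, locations, capacities, and “integration scores” are all invented for illustration and do not reflect any deployed system:

```python
# Toy integer-optimization matching: binary variables x[f][l] decide
# whether family f is placed in location l, maximizing a (hypothetical)
# predicted integration score subject to local capacity.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

families = ["f1", "f2", "f3"]
locations = ["town_a", "town_b"]
capacity = {"town_a": 2, "town_b": 1}
score = {("f1", "town_a"): 0.6, ("f1", "town_b"): 0.8,
         ("f2", "town_a"): 0.5, ("f2", "town_b"): 0.9,
         ("f3", "town_a"): 0.7, ("f3", "town_b"): 0.4}

x = LpVariable.dicts("x", (families, locations), cat="Binary")
prob = LpProblem("placement", LpMaximize)
prob += lpSum(score[f, l] * x[f][l] for f in families for l in locations)
for f in families:   # each family is placed exactly once
    prob += lpSum(x[f][l] for l in locations) == 1
for l in locations:  # respect local capacity
    prob += lpSum(x[f][l] for f in families) <= capacity[l]

prob.solve()
print([(f, l) for f in families for l in locations if value(x[f][l]) == 1])
```

Everything contested in the main text lives in the score dictionary: what “integration” means and how it is predicted is decided before the optimizer ever runs.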
38 Universität Hildesheim—Migration Policy Research Group, Match'In—Pilotprojekt zur Verteilung von Schutzsuchenden mit Hilfe eines algorithmengestützten Matching-Verfahrens, May 2021.
39 Hai Nguyen et al., Stability in Matching Markets with Complex Constraints, Management Science, 2021.
40 Petra Molnar and Lex Gill, Bots at the Gate, 2018.
41 Kirk Bansak et al., Improving refugee integration, 2018.
42 Hai Nguyen et al., Stability in Matching Markets, 2021.
43 Mohanad Moetaz, “New tool could help immigrants decide where to live in Canada“, Canada Immigration News, 25 March 2021.
44 Augusta Pownall, “Annie MOORE algorithm matches refugees to best-suited US cities“, 22 August 2019.
45 HIAS, “Implementation of Annie™ MOORE at HIAS”, Support Letter, 20 February 2020.
46 Observatory of Public Sector Innovation, Annie™ MOORE (Matching for Outcome Optimization and Refugee Empowerment), 16 September 2020.
47 Kirk Bansak et al., Improving refugee integration, 2018.
48 Jessica Bither and Astrid Ziebarth, AI, Digital Identities, Biometrics, Blockchain: A Primer on the Use of Technology in Migration, German Marshall Fund, Robert Bosch Stiftung, Bertelsmann Stiftung, June 2020.
49 German PREVIEW Project: German Federal Foreign Office, Krisenfrüherkennung, Konfliktanalyse und Strategische Vorausschau, 7 February 2020.
50 Migration Data Portal, EASO’s Research Programme: Using big data from global media to predict migration flows, 26 May 2021.
51 See for example, Petra Molnar, “Technology on the Margins: AI and global migration management from a human rights perspective”, Cambridge International Law Journal, 2019.
52 Josh Andres, Christine T. Wolf, Sergio Cabrero Baarro, Erick Oduor, Rahul Nair, Scenario-based XAI For Humanitarian Aid Forecasting, 2020, p. 2.
53 Danish Refugee Council, Global Displacement Forecast 2021, July 2021.
54 Sonja Peteranderl, “Predicting Refugee Movements? There's an App for That”, Spiegel International, 23 September 2020.
55 Deloitte & European Commission, Opportunities and Challenges, 2020, p.106.
56 International Organization for Migration, The Future of Migration to Europe: A Systematic Review of the Literature on Migration Scenarios and Forecasts, 2020, p.48.
57 Crofton Black, “Monitoring being pitched to fight Covid-19 was tested on refugees”, The Bureau of Investigative Journalism, 28 April 2020.
58 According to Alexander Kjaerum of DRC, in the countries where the model performs best it can forecast with an average margin of error down to 6-8 percent (Guatemala, Afghanistan, Colombia, Central African Republic). The model performed worst in newer crises such as Venezuela (42 percent average margin of error) and Mozambique (44 percent). Out of the 153 forecasts made so far, approximately half have had an average margin of error of 10 percent or below. See also: Danish Refugee Council, Global Displacement Forecast 2021, 2021.
59 United Nations Network on Migration, Deep Dive “Migration 4.0” on “Forecasting”, 5 November 2020; Julia Lendorfer, Predictive Migration Approaches in the EU, Presentation to the European Migration Network, November 2020.
60 United Nations High Commissioner for Human Rights, The right to privacy in the digital age, 13 September 2021.
61 Cade Metz, “Using A.I. to Find Bias in A.I.”, The New York Times, June 30, 2021.
62 EU Agency for Fundamental Rights (FRA), #BigData: Discrimination in data-supported decision-making, 30 May 2018.
63 European Commission, Proposal For A Regulation Of The European Parliament And Of The Council Laying Down Harmonised Rules On Artificial Intelligence (Artificial Intelligence Act) And Amending Certain Union Legislative Acts, 21 April 2021.
64 European Commission, Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence, Press Release, 21 April 2021.
65 Algorithm Watch, Automating Society, 2020.
66 Ada Lovelace Institute, AI Now Institute and Open Government Partnership, Algorithmic Accountability for the Public Sector, 2021.
67 Christopher Wickens et al., “Complacency and Automation Bias in the Use of Imperfect Automation”, The Journal of the Human Factors and Ergonomics Society, 2015.
68 See for instance: Philippe Lorenz, AI Standardization and Foreign Policy—How European Foreign Policymakers can Engage with Technical AI Standardization, August 2021.
69 Another development to monitor for how it might impact the most vulnerable, per the 2021 Tech Trends Report, is data synthesis and data simulation. To overcome the challenge of data scarcity, AI researchers use already collected and synthesized data to create “brand new data”. This makes monitoring the base data set for unwanted biases ever more important. See Future of Today Institute, 2021 Tech Trends Report, Artificial Intelligence, 2021, p.15.
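In its simplest form, data synthesis means fitting a statistical model to scarce real records and sampling new rows from it. A toy sketch follows; real approaches (e.g., generative networks) are far more sophisticated, and any bias in the base data is inherited by the synthetic data:

```python
# Toy data synthesis: fit a simple Gaussian model to scarce "real"
# records, then sample new synthetic rows from it. All data is invented.
import numpy as np

rng = np.random.default_rng(0)
real = rng.multivariate_normal([35, 2.1], [[40, 1.5], [1.5, 0.6]], size=50)

mean, cov = real.mean(axis=0), np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mean, cov, size=500)
print("synthetic sample (age, household size):", synthetic[0].round(1))
```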
70 Kate Crawford, “Time to regulate AI that interprets human emotions“, Nature, 6 April 2021.
Bibliography
- Ada Lovelace Institute, AI Now Institute and Open Government Partnership, Algorithmic Accountability for the Public Sector, 2021.
- Algorithm Watch, Automating Society—Taking Stock of Automated Decision-Making in the EU, January 2019.
- Algorithm Watch, Automating Society Report 2020, October 2020.
- Narges Ahani et al., “Placement Optimization in Refugee Resettlement”, Operations Research, 2021.
- Jasmine Andersson, “Home Office to scrap algorithm which secretly assigns ‘risk score’ to some nationalities by design“, The i, 4 August 2020.
- Josh Andres, Christine T. Wolf, Sergio Cabrero Baarro, Erick Oduor, Rahul Nair, Scenario-based XAI For Humanitarian Aid Forecasting, 2020.
- Kirk Bansak et al., Improving refugee integration through data-driven algorithmic assignment, Science, 19 January 2018.
- Jessica Bither, Astrid Ziebarth, AI, digital identities, biometrics, blockchain: A primer on the use of technology in migration management, German Marshall Fund, Robert Bosch Stiftung, Bertelsmann Stiftung, June 2020.
- Crofton Black, “Monitoring being pitched to fight Covid-19 was tested on refugees“, The Bureau of Investigative Journalism, 28 April 2020.
- Gill Bonnett, “How the Five Eyes countries share immigration data“, Radio New Zealand, 30 December 2020.
- Kate Crawford, The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale University Press, 2021.
- Kate Crawford, “Time to regulate AI that interprets human emotions“, Nature, 6 April 2021.
- Danish Refugee Council, Global Displacement Forecast 2021, July 2021.
- Deloitte & European Commission, Opportunities and Challenges for the Use of Artificial Intelligence in Border Control, Migration and Security, Volume 2, Addendum, May 2020.
- EU Agency for Fundamental Rights (FRA), #BigData: Discrimination in data-supported decision-making, 30 May 2018.
- European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, 21 April 2021.
- European Commission, Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence, Press Release, 21 April 2021.
- Future of Today Institute, 2021 Tech Trends Report, Artificial Intelligence, 2021.
- German PREVIEW Project: German Federal Foreign Office, Krisenfrüherkennung, Konfliktanalyse und Strategische Vorausschau, 7 February 2020.
- HIAS, Implementation of Annie™ MOORE at HIAS, Support Letter, 20 February 2020.
- Immigration Policy Lab, GeoMatch—Connecting people to places, website last accessed September 28, 2021.
- Immigration Policy Lab, Harnessing Big Data to Improve Refugee Resettlement, 2018.
- International Organization for Migration, The Future of Migration to Europe: A Systematic Review of the Literature on Migration Scenarios and Forecasts, 2020.
- Natasha Lomas, “UK commits to redesign visa streaming algorithm after challenge to “racist” tool”, TechCrunch, 4 August 2020.
- Cade Metz, “Using A.I. to Find Bias in A.I.”, The New York Times, June 30, 2021.
- Middle East Monitor, “France charges 4 cybersurveillance companies execs with ‘complicity in torture’“, 22 June 2021.
- Migration Data Portal, AI-enabled identification management of the German Federal Office for Migration and Refugees (BAMF), 28 April 2021.
- Migration Data Portal, EASO’s Research Programme: Using big data from global media to predict migration flows, 26 May 2021.
- Mohanad Moetaz, “New tool could help immigrants decide where to live in Canada“, Canada Immigration News, 25 March 2021.
- Petra Molnar, Technology on the Margins: AI and global migration management from a human rights perspective, Cambridge International Law Journal, 2019.
- Petra Molnar and Lex Gill, Bots at The Gate: Automated Decision-Making in Canada’s Immigration and Refugee System, 2018.
- Lucia Nalbandian, Using Machine-Learning to Triage Canada’s Temporary Resident Visa Applications, Ryerson Centre for Immigration and Settlement (RCIS) and the CERC in Migration and Integration, July 2021.
- Ann-Charlotte Nygård, “EU wide availability of personal data of third country nationals for migration and security purposes—the challenge of ensuring fundamental rights safeguards“, Blog Post, Migration Policy Center, last accessed 8 October 2021.
- Hai Nguyen et al., Stability in Matching Markets with Complex Constraints, Management Science, 2021.
- Observatory of Public Sector Innovation, Annie™ MOORE (Matching for Outcome Optimization and Refugee Empowerment), 16 September 2020.
- Office of the High Commissioner for Human Rights, Artificial intelligence risks to privacy demand urgent action—Bachelet, 15 September 2021.
- Sonja Peteranderl, “Predicting Refugee Movements? There's an App for That”, Spiegel International, 23 September 2020.
- Augusta Pownall, “Annie MOORE algorithm matches refugees to best-suited US cities”, Dezeen, 22 August 2019.
- Max Roser, Tourism, Our World in Data, 2017.
- United Nations Network on Migration, Deep Dive “Migration 4.0” on “Forecasting”, 5 November 2020.
- Universität Hildesheim—Migration Policy Research Group, Match'In—Pilotprojekt zur Verteilung von Schutzsuchenden mit Hilfe eines algorithmengestützten Matching-Verfahrens, May 2021.
- Christopher Wickens et al., “Complacency and Automation Bias in the Use of Imperfect Automation”, The Journal of the Human Factors and Ergonomics Society, 2015.
Program Partners
The German Marshall Fund of the United States
The German Marshall Fund of the United States (GMF) strengthens transatlantic cooperation on regional, national, and global challenges and opportunities in the spirit of the Marshall Plan. GMF contributes research and analysis and convenes leaders on transatlantic issues relevant to policymakers. GMF offers rising leaders opportunities to develop their skills and networks through transatlantic exchange, and supports civil society in the Balkans and Black Sea regions by fostering democratic initiatives, rule of law, and regional cooperation. Founded in 1972 as a non-partisan, nonprofit organization through a gift from Germany as a permanent memorial to Marshall Plan assistance, GMF maintains a strong presence on both sides of the Atlantic. In addition to its headquarters in Washington, DC, GMF has offices in Berlin, Paris, Brussels, Belgrade, Ankara, Bucharest, and Warsaw.
Bertelsmann Stiftung
The Bertelsmann Stiftung is committed to ensuring that everyone in society is given a fair chance to participate. Structured as a private operating foundation, the Bertelsmann Stiftung is politically non-partisan and works independently of Bertelsmann SE & Co. KGaA. The Stiftung acts on the conviction that international cooperation on migration is necessary if we are to adequately address the interests of migrants, destination countries and countries of origin in achieving viable solutions for all stakeholders. The Bertelsmann Stiftung advocates this triple-win approach both within and beyond Germany. Founded in 1977, the Bertelsmann Stiftung has since provided some €1.5 billion for non-profit work.
Robert Bosch Foundation
The Robert Bosch Stiftung GmbH is one of Europe's largest foundations associated with a private company. It works in the areas of health, education, and global issues. The Global Issues support area centers around the topics of peace, inequality, climate change, democracy, migration, and immigration society. With its charitable activities, it contributes to the development of viable solutions to social challenges. For this purpose, the Foundation implements its own projects, enters into alliances with partners, and supports third-party initiatives. Since it was established in 1964, the Robert Bosch Stiftung has invested around 1.9 billion euros in charitable work.
Authors' Note on Research Assistance
We would like to thank all those who took the time to speak to us in interviews, background conversations, and technology training sessions, including experts from academia, government, civil society, and the humanitarian field. A special thanks to Petra Molnar and Fabio Chiusi for their helpful review and feedback on earlier draft versions of this paper.