How to Rethink Tech in Migration

AlgorithmWatch organized a workshop to discuss how to fundamentally challenge — and replace — the current, untenable status quo concerning the use of AI and automation to manage and control human mobility, both at EU level and beyond. This is what we found.

5 November 2024

On Wednesday, 8th May, AlgorithmWatch and the AFAR (‘Algorithmic Fairness for Asylum Seekers and Refugees’) project hosted a group of experts on digital technologies and migration control, including academics, journalists, and representatives of civil society organizations and policy institutes. During an intensive day of cross-disciplinary discussions at the Hertie School in Berlin, these experts set out to identify key issues underpinning contemporary discourses and practices of digital migration control. By speaking across disciplines and fields of action, they also sought to identify future multidisciplinary strategies for pursuing political change.

A crucial overarching theme of the workshop was that disciplinary and professional boundaries sometimes unnecessarily divide and obscure our joint efforts to pursue more humane, fairer, and human rights-oriented border control practices. Academics, journalists, civil society organizations, and policy institutes often work on very similar issues, yet they speak to different audiences, work according to different schedules and funding constraints, and produce different kinds of outputs. Workshop participants shared a strong feeling that the most productive avenues for pursuing political change will come from multidisciplinary collaborations.

This collaborative insight is an important one, particularly in the politically fraught and polarized field of migration control. Critical researchers, journalists, and professionals working in this field risk feeling disillusioned by popular political sentiments and depressing news regarding the exclusionary, discriminatory, and violent nature of border control practices that stand in stark contrast to proclaimed values of justice, equality, and fairness. Yet, we are not alone. By working together across our specialisms, disciplines, and professions, we can work to reframe digital and algorithmic border control practices away from the solutionist and discriminatory status quo toward more equitable and humane alternatives.

The workshop focused on various levels of analysis. Discussions ranged from concrete political and socio-material developments relating to the ongoing digitalization of border control, all the way to broader abstract ideas about potential elements of radical, systemic change of the digital migration control agenda. This summary report outlines the key takeaways from the workshop discussions.

Dominant narratives and practices of digital migration control technologies

The digitalization of border control comprises a highly complex, multidimensional set of ideas, practices, and technical tools. During the workshop, participants considered how digital technologies and algorithms in migration control are underpinned by a broader ideology of technological ‘solutionism’ and of a ‘digital revolution’, which presents new digital tools as desirable, reliable, and inevitable. These ideological beliefs are reflected in more specific ideas about the necessity of ‘modernizing’ border control, and in technological assumptions about the calculability of the ‘riskiness’ of people on the move.

Yet, as workshop participants pointed out, technological developments are always underpinned by political power relations, which result in unequal benefits from new technologies. In the field of digital migration control, promises of increased mobility and ‘frictionless travel’ for some will come at the cost of expansive surveillance and exclusion of others. Drawing on contemporary critical ideas about ‘decomputerization’ as a strategy for resisting the unconstrained expansion of exploitative surveillance capitalism, we should critically question contemporary narratives about the desirability and inevitability of digitalization and the expansion of exclusionary border controls. Who do these systems benefit? Regardless of their (in)accuracy and (un)reliability, what kinds of ideas and assumptions—for instance, that migrants are necessarily linked to security risks and criminality—do these systems perpetuate?

In response to such questions, workshop participants agreed on the dire need to question not only the broader value basis underpinning new digital migration control technologies, but also the practicalities of limited accountability and democratic decision-making when political decisions regarding migration control are delegated to black-boxed algorithmic tools.

One way to unpack these political dynamics is to critically investigate the causes and effects of new algorithmic border control tools. Who created these tools, and who asked for them? What political, social, and technical logics underpinned their development? What prior systems were in place, and what new technological affordances are created by new systems? For instance, the phenomenon of ‘function creep’—whereby migration databases are initially created for very specific use-cases but later expanded to a wide variety of law enforcement and security purposes—is well documented and should be made more transparent.

Regarding their effects, algorithms in border control contexts can have adverse practical consequences for individuals on the move, profiling them, casting suspicion on them, and excluding them. Yet, these tools also have symbolic, political, and legal effects that need to be critically examined and critiqued. Because of their links to broader ideas about tech solutionism and modernization, algorithmic border control tools can allow states to appear progressive even when these tools have discriminatory effects, and even when they do not work as intended. Rolling out new digital technologies that promise to exclude ‘undesirable’ populations based on ‘objective’ calculations can win political support even when such algorithmic decisions are far from objective or fair. Legally, rights to privacy, non-discrimination, and the explainability of algorithmic decisions risk being undermined by autonomous and opaque digital border control tools.

Gaps between academia, civil society, policymakers, and the public

Workshop participants identified some remaining obstacles to collaboration across professional disciplines, while also highlighting that a lack of knowledge or information exchange is not always the primary obstacle to enacting progressive change when it comes to border control algorithms.

Differences in modalities of knowledge creation, timescales for collecting and analyzing evidence, and access to financial and other resources sometimes result in different kinds of outputs and a lack of productive engagement between fields. Academic research, for instance, is often aimed primarily at publication in academic journals, with only limited resources available to disseminate findings more broadly. Nonetheless, the theoretical depth of academic research, along with access to key stakeholders and evidence through structured interviews and document analysis, provides unique insights into border digitalization. Civil society organizations often work to tighter deadlines, with funding and output targets tied to the requests of specific funders. Yet, they can also be uniquely placed to influence political processes by being included directly in policy discussions. Journalists likewise work to tight deadlines with a view to reaching broad popular audiences, which can limit the depth of their engagement and their capacity to cover particular issue areas in a sustained manner over the long run. Nonetheless, they are often at the frontlines, documenting new developments on the ground and collecting crucially important evidence of the immediate impacts of digital border controls.

Yet, these differences between fields also point to productive avenues for future collaboration. Academic research can provide theoretical and analytical bases for the policy recommendations of civil society actors and journalists’ investigative research practices. Simultaneously, insights from the policymaking and public engagement practices of civil society actors, and journalists’ direct engagement with ongoing political processes can inform research pathways for academic research.

Workshop participants highlighted several recent and ongoing efforts to construct broader political coalitions to bridge the divides between these different professional disciplines. Coalitions have been formed both in response to particular policy initiatives—such as deliberations regarding the EU AI Act and technical developments such as the expansion of automated facial recognition and risk assessment technologies—and more broadly to investigate ongoing developments relating to the digitalization and datafication of borders across Europe and beyond.

One key insight from the experiences of such coalitions has been that, often, the primary obstacle to political change is not a lack of information or evidence regarding the harms of algorithmic border controls. Policymakers and the general public can be confronted with evidence of these harms, yet still support digital border control technologies and algorithmic decision-making at the border. In such instances, enacting change requires a broader public effort to challenge the value-based assumptions regarding the desirability or equitability of digitalization more broadly, as well as direct engagement with the politics of migration control on a more general level. A key obstacle to such broader engagement is the polarization and mediatization of migration as a highly loaded political issue-area, which can preclude deeper discussions of the underpinning values guiding the digitalization of border controls.

In this context, workshop participants argued that the creation of new knowledge regarding algorithmic border controls—or evidence regarding the accuracy, reliability, or fairness of these tools—is not always enough to produce a reframing of border digitalization. The power of solutionist and modernization narratives can override technical, social, and humanitarian concerns through promises of solving these problems in the future through an even further intensification of algorithmic decision-making. The complexity of these systems only acts as a further obstacle to productive critique and political challenges.

Radically reframing algorithmic migration control

Against this backdrop of producing new knowledge about digital borders while also attempting to reframe algorithmic border controls more broadly, workshop participants engaged in a blue-sky discussion to critically consider what needs to change for digital and algorithmic borders to become fairer and more equitable in the future.

A key theme in discussions of digital innovation in this workshop was the influence of private sector technology companies in shaping processes of digitalization at the border. Public authorities now increasingly identify the private sector as a key site of technological ‘innovation’, to which they defer decision-making regarding the specific technical structure of new border control technologies.

Academics, civil society actors, and journalists have all repeatedly shown that supposedly technical organizations and politically neutral private technology companies in fact enact particular social and political assumptions through the development of particular kinds of digital and algorithmic border control technologies. Technologies are never neutral, but always benefit some over others and are designed to solve particular kinds of social problems. Since processes of technological development are guided by these implicit value-based assumptions, this workshop’s participants agreed on the dire need to reduce the influence of private sector technology companies in shaping the public border control agenda. Moreover, processes of technological development and regulation should be made more transparent and democratic by opening them up to public scrutiny. So too should public technical organizations such as eu-LISA, the International Civil Aviation Organization, the International Organization for Migration, and others be subjected to closer public scrutiny, recognizing that the technical standards and tools these organizations produce have highly political effects on the ground.

Since private companies and technology regulators play an increasingly crucial role in shaping border control ‘solutions’ and narratives about digital technology, another key gap remains between critically oriented academic researchers, civil society actors, and journalists on the one hand, and the producers of new digital technologies on the other. Constructive political change in the field of digital and algorithmic border controls will require thinking about potential avenues for the ethical ‘co-production’ of new border technologies. Such practices must of course be carefully considered in order to avoid the co-optation of critical actors into repressive or exclusionary political decisions. Nonetheless, leaving technical actors to develop tools without independent critical input from academia and civil society—and only critiquing these tools once they have already been deployed—will produce only limited results in terms of reshaping digital borders.

Workshop participants also agreed on the need to de-securitize and demilitarize the border control field. New digital border control tools and algorithmic solutions are often developed with the assumption that migration control and border management are necessarily, and primarily, issues of national security and crime control. The economic, social, and humanitarian dimensions of human mobility are overshadowed by the crime control and security-oriented agenda of border digitalization.

This political orientation of new digital and algorithmic border control tools is not limited to any particular technological solution, but characterizes contemporary technological imaginaries of current and future border controls more broadly. Contemporary processes of digitalization are as much about the capacity of current tools as they are about imagining technological futures. Claims about the future—though often presented as evidence-based, relatively certain predictions—are in fact contestable, political claims about the desirability of particular kinds of technological ‘solutions’ and political practices. These algorithmic imaginaries not only popularize and normalize certain technologies, but also marginalize and discredit alternative visions of fairer and more equitable migration policies. In order to bring about progressive political change, we need not only to shape discussions about the current realities of migration, algorithmic surveillance, and digital exclusion, but also to imagine new alternative futures for digital borders.