No Access to Border Tech: More Transparency Needed in EU Research on Border Surveillance Technology
EU-funded research in AI-driven border surveillance is largely opaque, limiting public scrutiny and posing risks to human rights. Despite being publicly funded, many research outcomes and methodologies remain inaccessible, hindering the public’s right to understand how these technologies impact border management and migration.
Our research demonstrates the ineffectiveness of existing transparency mechanisms. Over a period of 18 months, we scrutinized a range of critical border-related research projects. Our finding: these EU projects and their results are hidden from any meaningful public access. The EU's Research Executive Agency (REA) frequently restricts access to project information, citing security and commercial interests over the public good. Redactions and arbitrary limitations on document access mean that critical data on high-stakes projects – such as those involving automated risk assessments, biometrics, swarms of unmanned vehicles and drones, and other large-scale surveillance systems – are kept from public scrutiny. Yet research that serves the public interest and is funded with public money should be accessible to the public.
Private companies in defense and security play an outsized role in shaping border technologies, not least because they receive a large part of the EU’s research funding. Some projects even list military bodies as potential end users, contradicting EU requirements for civilian-only applications and raising red flags about the dual-use potential of these technologies.
The EU's ethics assessment system appears insufficient to address human rights concerns: only one project proposal has ever been rejected on ethical grounds. Most projects clear this process without any robust examination of their potential to harm human rights.
Even the European Border and Coast Guard Agency (Frontex) acknowledges the “limited utility” of technology for border security. There is little evidence of the practical added value of such automated solutions, yet the EU continues to fund them substantially: between 2014 and 2022, the EU allocated more than €250 million to 49 projects researching and developing border technologies. The EU's overall spending on border security almost doubled in the 2021–2027 budget cycle compared to the previous one.
End unchecked AI uses in migration. The implementation of the AI Act provides an opportunity to establish meaningful transparency standards for high-risk AI deployments in border and migration contexts. Establish effective oversight and enforcement structures at EU and national levels, involving civil society and affected individuals in shaping high-risk requirements. Systematically disclose how research project outputs feed into operational use.
Center affected voices in AI migration research. Security actors such as Frontex and the security and defense industry have a disproportionate influence over the EU's research agenda. The EU should elevate the voices of affected people on the move, civil society, independent academics, and journalists in the publicly funded research and development of AI systems. Make these groups formal stakeholders in EU-funded research to guide decisions on deploying automated systems at borders or in asylum and immigration. Any new research funding in this area should be tied to introducing such transparency, co-design, and oversight mechanisms.
Clearly separate “civilian” and “military” applications. The assessment behind the decision to qualify research project outputs as having an “exclusively civilian application” should systematically and transparently be made accessible to the public. Clarify the criteria used to assess the civilian and military potential of EU-funded projects, so that civil society can follow the decision-making.
Establish rigorous and transparent ethics assessments for research projects. All information concerning the operationalization of the seven key ethical requirements developed by the EU's High-Level Expert Group on AI should be proactively made available to the public. This transparency obligation should cover official documents produced both during self-assessment (e.g., the “ethics issues checklist” on artificial intelligence) and by reviewers. Grant public access to project documents on risks and corresponding mitigation strategies. Disclose this information for all automated systems in border security, regardless of risk level.
Guarantee freedom of information. The public must have meaningful access to information about the AI systems being researched and developed to monitor borders and manage mobility. We need balancing procedures for freedom of information requests that compel EU authorities, law enforcement agencies, and AI developers to weigh national security interests against the public’s right to know and overall public interest. In general, public interest should be prioritized over commercial interests in freedom of information requests. This means that the legal threshold for “overriding public interest” should be significantly lowered.
Our Research: Case Studies on EU Projects
The Automation on the Move project examined more than 20 EU-funded research projects in areas such as border surveillance and migration management. These projects developed technologies such as pre-screening algorithmic tools, situational awareness and risk assessment systems, biometric identification systems, emotion recognition systems, and other AI-driven surveillance tools, including drones and automated surveillance towers.
Two EU funding schemes were the focus of our investigation: Horizon 2020 (H2020) and Horizon Europe (HE). We examined a selection of EU-funded research projects in detail to understand what they promised to deliver and what their human rights implications could be.
These projects are not speculative exercises. Just like their predecessors funded under Framework Programme 7 (FP7), which informed the creation of the European border surveillance system EUROSUR, they are intended and designed to flow into actual deployment. We documented several instances of operational use: surveillance outputs from the ANDROMEDA project were deployed in Greece, while components from projects such as EFFECTOR and D4FLY ended up augmenting real-life border security tools.
Our Case Evidence: Patterns of Obstruction
Investigating the technology used at EU borders revealed structural barriers to transparency. Access to information requests we submitted to the EU's Research Executive Agency (REA) were often delayed or only partially fulfilled. Requests for documents were frequently met with arbitrary limitations, such as being required to narrow our requests to a maximum of 10 documents per project. Even that threshold can be lowered further, we found: the REA follows a “practice” of unilaterally bundling separate requests concerning multiple projects and treating them as a single request, so that the document cap applies to the bundle as a whole rather than to each project. This approach further limited the number of deliverables we could actually obtain per project (in our case, to just three instead of ten).
In some instances, project documents were so heavily redacted as to be almost useless for public scrutiny. Several projects also failed to produce required public deliverables, raising questions about the actual disclosure of publicly funded research results. For example, the NESTOR project's Grant Agreement was disclosed with 170 redacted pages, while public deliverables for projects like MELCHIOR and ODYSSEUS are almost entirely absent, despite their timelines indicating otherwise. Additionally, we encountered project websites that suddenly went offline or were replaced with unrelated sites, further hindering access to key information.
Ethical Screenings Fail to Mitigate Harmful Outcomes for Human Rights
All H2020 and HE-funded projects must undergo an ethics assessment to ensure compliance with ethical principles and legislation. Our investigation reveals that this ethics appraisal process is insufficient to address the profound ethical, social, and human consequences of AI systems used for border surveillance and mobility management. Ethics appraisals are problematic in several ways:
Are the ethics reviewers independent? The lack of public access to information prevents thorough scrutiny of the whole ethics appraisal scheme. In particular, it makes it impossible to verify the independence of the assigned ethics reviewers.
Do harm reduction measures work? The opacity around the research projects prevents any observer from understanding whether the mitigation measures to counter ethical risks can actually avert specific harms. Even in the limited cases in which the projects produce a publicly available analysis of the ethical and societal risks, mitigation strategies are only briefly and vaguely sketched.
Ethical assessments are applied inconsistently across projects. Of all Horizon 2020 and Horizon Europe project proposals in the area of border security, only one was rejected on ethical grounds. This project, called EMERALD, was deemed to have potential military applications. In contrast, other projects that explicitly mention “military bodies” among their target groups for future uses were approved for funding. This inconsistency suggests that the EU's ethical assessment scheme is not applied rigorously.
Ethics assessments remain superficial. Many of the projects we analyzed appear to comply with ethics requirements only superficially, reducing them to compliance with basic data protection law and research integrity rules, while more fundamental ethical issues around fairness and bias are rarely addressed.
Our findings confirm long-standing research on EU-funded research projects on border security and the management of human mobility. Criticism of these initiatives dates back to 2011, when Statewatch called for a fundamental reassessment of FP7-funded projects. Already then, researchers argued that these technologies raise broader issues beyond privacy and data protection, and require more accountability and public control. More recently, a 2019 paper analyzing the ethics assessments undergone by the H2020 MARISA and RANGER projects concluded that while “ethical compliance has long been near synonymous with proper research ethics, other important dimensions have been left with more or less an anecdotal status.”
Conclusion: Automated Borders on a Collision Course with Democratic Principles
The EU continues to rely heavily on automation and AI as a potential way to manage mobility and surveil borders. Many of the EU-funded AI tools we examined do not work as intended. Even when they do work, they do not center people on the move and their human rights. The utility of these tools and applications lacks a proper scientific evidence base, and the EU's opaque approach risks conflicting with humanitarian concerns, democratic principles, and human rights.
This path is a dead end. Publicly funded R&D must prioritize full transparency, human rights, and a careful assessment of whether automation is appropriate for managing human mobility.