For over a year, we have been looking into EU-funded border security research projects to assess their methodological approaches and ethical implications. We failed.
Through the Horizon 2020 program and its successor, Horizon Europe, the EU funds research and innovation projects. Some of these projects are dedicated to automating border security. But are EU borders being automated responsibly?
On paper, that's what one would assume: all projects that receive funding from the European Union must “comply with ethical principles and relevant national, Union and international legislation,” as the EU Commission writes in its Horizon Europe funding regulation. Since every Horizon Europe project must undergo a complex, documented ethics appraisal scheme, getting answers to our research questions seemed feasible.
Until it wasn't. After more than a year of research and constant interactions with the Commission's Research Executive Agency (REA), the EU funding body that handles access-to-information requests concerning such projects (and whose “mission” includes building “inclusive societies” in Europe), we found ourselves tangled up in EU bureaucracy.
The intricate architecture of the ethics appraisal scheme defeats its own purpose: it makes it exceedingly difficult for any independent external observer to verify whether, and how, the required steps of the procedure have actually been followed.
Hide and Seek
This opacity impaired our capacity to examine the actual risks arising from project outputs and the mitigation strategies deployed to address them. It is impossible to know whether the appraisal procedure asks the right questions to make project outputs genuinely “ethical,” rather than merely compliant with data protection and research integrity rules.
REA, and by extension the EU Commission, operate on the assumption that the substance of project assessments should in principle not be disclosed to the public. This applies even to the methodologies and KPIs used to evaluate the performance of project outputs. Under such constraints, a full, evidence-based analysis is impossible.
We encountered this obstacle, for example, when trying to assess the ethics appraisal undergone by the long-completed and highly controversial ROBORDER project, which set out to deploy unmanned patrol robots. We requested information about ROBORDER's “methodology applied for the evaluation of the system performance.” At first, we were denied it with reference to the “protection of the public interest as regards public security.” The identity and affiliation of individuals involved in the ethics review process would also not be shared, to protect their “privacy and integrity.” REA further cited “commercial interests” and the protection of intellectual property as lawful grounds to refuse disclosure: “releasing this information into public domain would give the competitors of the consortium an unfair advantage, as the competitors would be able to use this sensitive commercial information in their favour.”
What outweighs public interest?
Such justifications for non-disclosure recurred across all the requests we sent out. In the end, REA did provide us with information on the methodology. But we also requested the “Draft of concept of operation, use cases and requirements” (deliverable D1.1), as it “summarises the planned concept of operations, initial use cases and user, ethical, legal and security requirements.” This time, we did not get it, not even in redacted form. REA claimed in its written reply that “D1.1 contains sensitive information about border surveillance, in particular the reference to security system requirements, operational aspects of the surveillance system, individual technological components” which, “if disclosed, would undermine the protection of public interest as regards the public security.”
ROBORDER developed unmanned vehicles to patrol EU borders, capable of operating in swarms; such capabilities would most likely be of interest to the military as well. It is all the more concerning, then, that non-disclosure is held to outweigh the “public interest.”
We disagreed with REA's position that the public should under no circumstances be informed about “the methodology applied for the evaluation” of the performance of systems that impact human rights. Such systems should be independently evaluated and scrutinized by the media, civil society, academia, and the wider public. But how can anyone do that when it is effectively impossible to understand their basic inner workings?
With great power comes great responsibility
More transparency is urgently needed, but it's not all about secrecy and lack of information. When discussing the ethical framework for maritime surveillance technology projects (such as RANGER and MARISA), researcher Sari Sarlio-Siintola and colleagues lamented another problem affecting the appraisal procedure itself: “The ethical guidelines of Horizon 2020 focus heavily on traditional research integrity-related issues and as such offer limited guidance in terms of the ethical and societal sustainability of real time setting piloting and trials, or the final product/solution itself.”
Shallow ethical assessments cannot get at many of these projects' fundamental moral issues. Those issues should be publicly debated in the EU's democratic societies, not suppressed by invoking security concerns.
While the procedure has evolved under Horizon Europe, namely towards a greater focus on external audits, the underlying logic remains unchanged. This is why Sarlio-Siintola warns: “When technological advancements lead to an increased surveillance capacity for authorities (…), so do the moral and legal duties to act against ill will and to help those in distress; with great power comes great responsibility.”
Increased surveillance capabilities should entail more transparent oversight, not less. It is high time for the REA to act accordingly.