What to expect from Europe’s first AI oversight agency

Spain has announced the first national agency for the supervision of Artificial Intelligence. In its current shape, the plan is very industry-friendly and leaves little room for civil society.


Spain wants to lead the way in the regulation of Artificial Intelligence in Europe. "We don't want to be witnesses but protagonists of the great digital changes," said Carme Artigas, the Spanish government's secretary of state for digitization and a key figure in these aspirations, in a recent interview. In the coming months, thanks in part to her efforts, the first national agency on the continent created to supervise and control these technologies will open under the acronym AESIA (which stands for Agencia Española de Supervisión de la Inteligencia Artificial).

The AI Act, an upcoming European regulation currently being negotiated, will very likely require member states to designate national authorities to monitor compliance. While most countries have not yet explained how they will do this, Spain has already said that it will create an entity independent from the government, tasked with overseeing both private and public sector algorithms.

AESIA's ability to veto and sanction the use of potentially harmful systems will be closely linked to the final version of the AI Act that is approved in Brussels. The available resources and the internal functioning of the agency are currently under discussion within the Spanish government. But the official documentation and the sources consulted by AlgorithmWatch for this article allow us to sketch out what the agency's main lines will be.

Testing European standards

Last summer, Carme Artigas and Thierry Breton, the European Commissioner for Internal Market and Services, presented a “regulatory sandbox” for Spanish companies to test the new regulation.

The Spanish government's objective with this sandbox is, on the one hand, to identify "best practices" among companies on how to implement the European regulation. These will be included in a guide that Spain should present during its presidency of the European Council in the second half of 2023. On the other hand, the sandbox should help define how AESIA will work and what its relationship with Spanish operators of Artificial Intelligence will be. This is what emerges from a public tender published in December and from the explanations offered by the government to AlgorithmWatch.

"It turns regulatory logic on its head. Normally you would develop and write the law and look at the consequences. And based on that you could make the relevant legal modifications. In this case, it is the other way around: companies with more advanced AI systems participate and they see use cases and where discrimination can occur. Based on that, we will try to minimize the biases of the systems," says Carlos Ruiz de Toledo, legal advisor in Artigas' office, in a video call.

In recent months, an external provider to the government – consisting of the consultancy firm Deloitte and the association OdiseIA – drafted several guides and manuals to explain in a practical way to companies how to implement the requirements of the regulation. These guides include compliance self-assessments and checklists and focus on the use cases considered high-risk by the AI Act.

Between March and April 2023 (five months later than initially planned) the sandbox will be open to companies that want to participate on a voluntary basis, Miguel Valle del Olmo, deputy director general for AI at the Ministry for Economic Affairs and Digital Transformation, explains by email.

National seal and the automation of monitoring

The public tender published in December details the government's plans for the agency's first steps. The company awarded the contract (it is divided into two lots) will, among other functions, have to update and draft the final sandbox guidelines but also "define the internal processes" for the agency's operation and the professional profiles necessary for its implementation.

Among these plans is the creation of a “national AI seal”: a certificate accrediting that the systems deployed in the country meet the requirements set out by Europe. This seal, however, will be voluntary, at least until the European regulation comes into force. The government is not clear about how it will encourage companies to seek this certification, but it hints at the possibility of linking companies' access to European funding to this seal of quality.

Artigas, the secretary of state, also plans to automate as much of the agency's monitoring work as possible. To this end, the government has commissioned the development of a web-based tool that will be accessible to both agency staff and companies using high-risk AI systems.

Companies will be able to self-assess whether their systems comply with the regulation, as well as monitor their compliance once they have been deployed in the market. The tool, according to the official documentation consulted, will include an analysis of the code and datasets of the systems.

Within the government's AI advisory board, the partial automation of algorithm audits has been discussed internally in recent months, according to Amparo Alonso Betanzos, a member of this board and an AI researcher. She, like others, recognizes the limitations of an automated analysis to measure the potential harmful impact of an algorithm. "Automating certain things is quite complex. There will be parts that can be measured automatically: the data that is collected, the variables used, etc. But for example, the interface with humans is more complicated," she says.

Agency resources

Like any newly created public agency, AESIA needs sufficient financial and human resources to start operating. Its budget allocation is precisely the main bone of contention between Carme Artigas' department and other Spanish government ministries.

Initially, it was reported that the agency would have 5 million euros at its disposal. Ruiz de Toledo says the intention is to increase this budget before the launch, but clarifies that this will depend on approval from the Ministry of Finance. He points out that one option for increasing the agency's resources would be to channel its future economic sanctions against companies or administrations into its own budget.

A large part of the final allocation will go toward the salaries of the agency's staff. In the autumn, the government estimated a staff of 40 workers, all of them state officials with technical, administrative, and legal profiles. The absence of other profiles has raised suspicions among researchers and algorithmic justice advocates.

"Without humanist or social profiles on this staff, how are you going to observe the social impact of AI systems?" asks Judith Membrives, a digital rights activist. Ruiz De Toledo does not rule out the possibility of opening up the staff to external professionals or collaborators in the coming months, although he acknowledges the difficulties in justifying their inclusion in a public body such as this.

AESIA's financial and human resources, which are key to defining its real supervisory capacity, will be included in internal statutes that should be ready before the summer. Artigas' team's most optimistic forecast is to have all the documentation in place by June and give the government time to process the relevant laws in the summer. Thus, if nothing goes wrong, the agency would be up and running next autumn.

Officially, the government has so far only announced that the agency's physical headquarters will be in the Galician city of A Coruña, after a controversial competition between several candidates.

Social impact of AI

Whether upcoming European regulations and national supervisory agencies can detect (and prevent) the negative social impact of AI systems remains to be seen.

As early as 2020, the Spanish government included in its national AI strategy the creation of a public observatory of algorithmic social impact. More than two years later, nothing is known about this entity. Now, under the umbrella of the new European regulations, Madrid is proposing a theoretically more ambitious plan to protect citizens from the harmful effects of these technologies.

In the published tenders, the government includes the creation of a "national plan for the protection of vulnerable AI groups" with several actions. These include a study of AI's implications and risks, broken down by social group and area of application, and the drafting of best-practice manuals for AI systems in public administrations – which should apply to algorithms that have been in use in Spain for years, such as RisCanvi, Veripol, and Viogen.

It also specifies the creation of a "risk observation laboratory" that will carry out "informal audits" on systems considered of high risk. This work plan will be carried out by external companies and the Artigas team, says Miguel Valle del Olmo. And its results will be handed over to AESIA once the agency starts working.

Civil society’s role

These plans on paper contrast with the limited role that the Spanish government has so far left to civil society in the design and construction of the agency. Spain has seen a proliferation of entities defending human rights in the digital and AI sphere in recent years.

This is the case of Fundación Civio, which has taken the government to court over software used for the allocation of a subsidy; the Algorights collective; and the Observatorio de Trabajo, Algoritmo y Sociedad (TAS), which defends the rights of platform workers.

These and more than fifty other social organizations demanded in a public letter to the government last September that they be allowed to participate in the design and strategy of the new supervisory agency. Weeks later, Carme Artigas and her team received several representatives to listen to their requests.

Several participants in this meeting point out that, beyond fine words, Artigas dismissed their inclusion in the agency's design process. In fact, four months after this meeting, the government has not contacted them again, AlgorithmWatch can confirm.

Albert Sabater, researcher and director of the Observatory of Ethics in AI of Catalonia (OEIAC), argues that civil society must be given greater weight in this type of process. "Citizens are not only going to take a stand when there is something wrong [with AI]. They want to be heard so that these systems are designed in such a way that, in addition to the most mechanical part, their impact on society is also taken into account." "We are witnessing a disruptive process in which many more actors must be involved, of that there is no doubt, otherwise it will be too late," he adds.

Another striking element of this project is how the design and development of several of the future agency's plans and actions are being tendered out externally. David Cabo, co-director of the Civio Foundation, speculates that the government is rushing to meet the deadlines and is trying to get the agency up and running this year.

"The design of the agency should be in public hands. If you want the agency to accumulate knowledge to keep an eye on others, if you start tendering instead of hiring civil servants, that knowledge is not going to stay within the agency," he warns.

Edited on 2 February to correct Amparo Alonso Betanzos' name.
