AccessNow maps regulatory proposals for Artificial Intelligence in Europe – our take

AlgorithmWatch welcomes the report by AccessNow on mapping regulatory proposals for Artificial Intelligence in Europe. It provides a good starting point for the ongoing work that will need to be undertaken to develop robust regulatory frameworks in this field.

Position

19 November 2018

#publicsector

Full report (PDF)

It is an important finding of the report that many countries are still in the nascent stages of developing strategies, let alone regulatory frameworks, and that some countries do not yet appear to have a national strategy at all. The gap analysis of existing approaches in relation to human rights underlines the importance of keeping a close eye on future developments in this field.

The earlier we understand which approaches are emerging, the easier it will be to identify where regulation will converge and diverge, reflecting national priorities, existing legal frameworks, and societal discussions, including on ethics and morals. This, in turn, will allow an analysis of where we can maximise the technology’s positive impact, mitigate negative outcomes, and reduce the occurrence of unintended consequences. Lawmakers and regulators can draw on such work to shape and refine their own approaches on an ongoing basis.

On a more critical note, the report does not take into account relevant discussions and regulatory approaches already in place, because it adopts the term Artificial Intelligence that currently frames governmental discussions. However, the term is ill-suited to circumscribe the challenges we face as societies. AI is too broad and fuzzy a concept to form the basis for discussions on regulatory approaches. It lumps together aspects that should be kept apart: automated cars, high-frequency trading, medical diagnosis, search engines and citizen scoring all involve “AI”, but it makes little sense to treat them all alike for that reason.

Instead, our concern should be focused on systems that either make decisions automatically or support decision-making. Some of the best-known examples of this, like the COMPAS risk assessment system, would not be considered AI by most accounts. On the other hand, many AI-based applications, like predictive maintenance systems, are less important, as they pose no threat to civil liberties, justice or equality. What we need to be concerned about are systems that automate decision-making or decision support.

This is why we see great value in AccessNow’s “AI specific human rights assessment”. We will only be able to identify both risks and ways to mitigate them if we look at specific sectors individually and at how we can protect societies’ cohesion. The GDPR will not help address the risks associated with predictive policing as long as the systems do not use personal data. At the same time, EU employment equality law, applied creatively and with determination, could be used to address discrimination against algorithmically controlled platform workers.

AlgorithmWatch is currently working on a report that provides further detail on the German approach, a further step towards filling in the blanks. In addition, we are mapping the use of automated decision-making systems in certain sectors of society, which will further improve the understanding of automated decision-making and decision support.

We welcome future discussion and collaboration on these issues to pool knowledge and resources in this area, as the complexity of the challenges demands a multi-pronged approach.

Read more on our policy & advocacy work on ADM in the public sector.
