Public authorities and private companies – often in partnership – are fond of using "AI"-based systems to handle tasks that were traditionally carried out by humans. However, a mathematical formula cannot solve complex societal problems, and not everything can be distilled into an algorithm. Moreover, an algorithm may be inherently biased and/or target certain groups of people in a distorted, discriminatory way.
Certain domains – such as migration – are especially prone to human rights violations, both because of the power imbalance between the people affected and those deploying AI, and because of the lack of transparency and accountability surrounding these systems. People on the move are often already subject to discrimination, and the use of AI in this context – for example, for border surveillance – reinforces discriminatory patterns. They also typically have fewer means and face practical barriers when trying to defend themselves. In line with EU values and principles, it is imperative for the EU to protect the fundamental rights of all people – including those outside the EU.