From evidence to impact: Stiftung Mercator supports AlgorithmWatch
Stiftung Mercator will support AlgorithmWatch over the next three years, strengthening our work to influence digital policies and algorithmic governance in the public interest.
Germany’s civil society has considerable room for improvement when it comes to shaping the process of digitalization. Given the social importance of this cross-sectoral issue for our future, civil society organizations committed to digital rights aren’t nearly as influential as they should be. The environmental and ecological movements – with organizations like WWF, Greenpeace, NABU, or BUND – exemplify what civil society can achieve. Yet every single one of these organizations commands financial resources that probably exceed the combined budgets of all political initiatives promoting digital rights in Germany.
This is why digital rights organizations should be supported in a way that empowers them to act on a wider scale and broaden their political and social influence. Only by increasing their presence in the public consciousness will such organizations be able to establish a sustainable financial structure for themselves. The greatest beneficiary of this support will be democracy itself; social cohesion, inclusive debates, and our ability to act in the public interest can only be maintained if civil society remains strong and its representatives are trustworthy.
With Stiftung Mercator, AlgorithmWatch has found a new partner to pursue a common goal. Stiftung Mercator will support our policy and advocacy work and thereby help us to build and consolidate a sustainable organization.
Artificial intelligence and automation
Automated decision-making systems (ADMS) based on artificial intelligence affect the way our society works. Their already considerable influence on our everyday lives is becoming ever more pervasive. On Facebook, YouTube, TikTok, and Instagram, algorithms opaquely determine what content is shown or blocked. Other ADMS determine who can obtain a loan or benefit from state support, or in which neighborhoods police forces should be deployed. This contributes to a concentration of power in the hands of the few organizations and state actors that develop these systems. The social balance will remain in danger as long as these actors aren’t held accountable and fail to fulfill their responsibility to the public.
Through technologies like biometrics and voice recognition, algorithmic systems increasingly make up important elements of our social infrastructure, which threatens to make their power even more problematic. The underlying data, models, and technologies are developed and controlled by the private companies implementing these algorithms, giving those companies a means of leverage. The democratic public is becoming more and more dependent on privately created infrastructures, which weakens or even endangers social sovereignty as well as people’s individual autonomy. These systems regularly have negative social effects that are beyond democratic control, such as the restriction of labor rights (for example at Uber, Lieferando, or Amazon), gentrification (a phenomenon influenced by Airbnb), or technological vendor lock-in of public services (e.g. IBM, Microsoft, Palantir).
However, the development of AI-based systems also provides an opportunity to recognize and fight social injustice, sexism, racism, and every other form of discrimination more effectively. If deployed adequately, algorithmic systems can facilitate access to public services and improve their quality, expand access to information, and create equal opportunities for individuals and social groups. This is what AlgorithmWatch strives for.
Fostering autonomy and fundamental rights for the common good
Algorithmic systems must be applied in ways that strengthen social autonomy and fundamental rights to the benefit of the common good. We must establish a suitable framework and governance structures that guarantee democratic control over algorithmic systems. AlgorithmWatch puts this comprehensive governance concept into practice in the following ways:
- We strengthen individual autonomy in dealing with ADMS by providing information and explanations, and we advise people and parties affected by ADMS on the legal remedies available to them.
- We promote transparency and contribute actively to a fact-based social debate revolving around ADMS.
- We develop testing instruments to evaluate whether ADMS breach fundamental democratic principles.
- We develop concrete recommendations for public administrations and organizations and demonstrate how such systems can be applied responsibly.
- We make detailed proposals to national, European, and international legislators on what must be done to regulate these kinds of systems.
By putting our ideals into action, we increase the social value of algorithmic systems and contribute to a more inclusive and equitable society.