AlgorithmWatch is a non-profit research and advocacy organization committed to evaluating and shedding light on algorithmic decision-making processes that have a social relevance, meaning they are used either to predict or prescribe human action or to make decisions automatically.
AlgorithmWatch was founded by Lorena Jaume-Palasí, Lorenz Matzat, Matthias Spielkamp and Katharina Anna Zweig. The organisation (gGmbH) is run by Lorenz Matzat and executive director Matthias Spielkamp.
The more technology develops, the more complex it becomes. AlgorithmWatch believes that complexity must not mean incomprehensibility (read our ADM manifesto).
HOW DO WE WORK?
AlgorithmWatch analyses the effects of algorithmic decision-making processes on human behaviour and points out ethical conflicts.
AlgorithmWatch explains the characteristics and effects of complex algorithmic decision-making processes to a general public.
AlgorithmWatch is a platform linking experts from different cultures and disciplines who study algorithmic decision-making processes and their social impact.
To maximise the benefits of algorithmic decision-making processes for society, AlgorithmWatch helps develop ideas and strategies for making these processes intelligible – with a mix of technologies, regulation, and suitable oversight institutions.
The ADM Manifesto
Algorithmic decision-making (ADM) is a fact of life today; it will be a much bigger fact of life tomorrow. It carries enormous dangers; it holds enormous promise. The fact that most ADM procedures are black boxes to the people affected by them is not a law of nature. It must end.
- ADM is never neutral.
- The creators of ADM are responsible for its results; ADM is created not only by its designer.
- ADM has to be intelligible in order to be subject to democratic control.
- Democratic societies have the duty to achieve intelligibility of ADM with a mix of technologies, regulation, and suitable oversight institutions.
- We have to decide how much of our freedom we allow ADM to preempt.
We call the following process algorithmic decision-making (ADM):
- design procedures to gather data,
- gather data,
- design algorithms to
  - analyse the data,
  - interpret the results of this analysis based on a human-defined interpretation model,
  - and to act automatically based on the interpretation, as determined in a human-defined decision-making model.