The anthology Busted! The Truth about the 50 Most Common Internet Myths, compiled in the run-up to the Internet Governance Forum 2019, clears up misconceptions about the impact and reality of the Internet and explains, in a fact-based, vivid and practical way, what science knows about Internet-based communication. It will be presented at the IGF 2019 in Berlin and is available online. AlgorithmWatch executive director Matthias Spielkamp is one of the 50 contributors; he busts the myth that ‘Algorithms are always neutral’.
Because an algorithm is nothing but a set of instructions that is applied to data – that usually comes in the form of numbers – it can contain no bias or prejudice that would have an influence on the outcome produced by using the algorithm.
Algorithms are designed by humans; so are algorithmic decision-making systems – from network management that favours certain forms of content over others (net neutrality) to “AI” that is supposed to automatically distinguish hate speech, disinformation or terrorist propaganda from journalism, parody and other forms of legitimate content. All of these systems use value judgments to arrive at their results: to perform its task, an algorithm needs to “know” which data packages to treat differently from others, and which criteria define a certain mode of expression. Leaving aside the question of whether algorithms will ever be able to assess speech correctly (they will not), it is obvious that such definitions are always developed by human beings with certain intentions. We would not call these definitions neutral, and neither is the algorithm that acts on their basis.
A benevolent reading of the myth is that algorithms subject all inputs to the same set of instructions and therefore act indiscriminately, and that they do not have any intentions of their own. This is true, but obviously beside the point.
With regard to so-called machine-learning techniques, another aspect comes into play. When algorithms are trained on large data sets (so-called “training data”) in order to identify patterns and “learn” from them, it needs to be understood that these data sets usually contain the biases inherent in human society. If a self-learning system draws conclusions on the basis of biased data, these conclusions will generally be biased, too, and therefore not neutral.
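How biased training data propagates into biased conclusions can be illustrated with a minimal sketch. The data below is entirely hypothetical: equally qualified candidates from two groups, where group “B” was historically hired less often. A trivial “learner” that merely reproduces historical hiring rates will reproduce the bias as well.

```python
# Minimal sketch (hypothetical data): a "self-learning" rule trained on
# biased historical records reproduces the bias in its predictions.
from collections import defaultdict

# Hypothetical historical records: (group, qualified, hired).
# All candidates are equally qualified, but group "B" was hired less often.
training_data = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", True, False),
    ("B", True, True), ("B", True, False), ("B", True, False), ("B", True, False),
]

# "Training": learn the historical hiring rate per group.
hired = defaultdict(int)
total = defaultdict(int)
for group, qualified, was_hired in training_data:
    total[group] += 1
    hired[group] += int(was_hired)

def predict_hire(group):
    # Predict "hire" if the historical hiring rate for the group exceeds 50%.
    return hired[group] / total[group] > 0.5

print(predict_hire("A"))  # True  - the learned rule favours group A
print(predict_hire("B"))  # False - the historical bias is reproduced
```

Nothing in the instructions themselves discriminates; the discrimination enters entirely through the data the rule was derived from.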
Algorithms are either directly designed by humans or, if self-learning, develop their logic on the basis of human-controlled and -designed processes. They are neither “objective” nor “neutral” but outcomes of human deliberation and power struggles.