Algorithms make no value judgements – except the ones designed by humans

[Image: King Charles II of England]

"Judgments, when being made by humans, can never be considered value-neutral." Says Danny Yadron, technology reporter for Guardian US in his article Facebook controversy shows journalists are more complicated than algorithms. True enough. The problem with his article: The underlying assumption that in contrast to human curation, algorithmic curation is value neutral. It is not.

Interestingly enough, Yadron himself acknowledges this implicitly in his article when he writes:

Snapchat has seemed to acknowledge this with the hiring of experienced journalists that get interviews with major newsmakers, including a recent one with vice-president Joe Biden. Apple has tried to split the difference by having users select what types of stories they’re interested in. Its editors then appear to help shape which stories users should see.

So humans help shape which stories users should see. They don't pick the stories users should see; they shape the selection. What exactly do they shape? The system that picks the stories. That selection is the outcome of a process in which biased humans - software developers - design automated data collection mechanisms and algorithms capable of learning from new inputs, aided by other humans - users and/or editors - who provide those inputs. And, magically, according to Yadron, the outcome is a "neutral algorithm". No, it is not. It cannot be.
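
To make the point concrete, here is a deliberately simplified sketch in Python of what such a curation pipeline can look like. The feature names, weights, and numbers are entirely hypothetical and stand in for no real product; the point is only that the ranking step is automated while every signal and every weight reflects a human choice.

```python
# Illustrative sketch only: a toy news-ranking function with hypothetical
# signals and weights, showing where human judgment enters an "automated"
# curation pipeline.

from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    shares: int          # user behaviour: how often people shared the story
    editor_boost: float  # editorial input: how strongly editors flagged it
    recency_hours: float # how old the story is

# Value judgment 1: developers decide which signals count at all.
# Value judgment 2: developers decide how much each signal counts.
WEIGHTS = {"shares": 0.5, "editor_boost": 0.3, "freshness": 0.2}

def score(story: Story) -> float:
    """Combine human-chosen signals with human-chosen weights."""
    freshness = max(0.0, 1.0 - story.recency_hours / 48.0)
    return (WEIGHTS["shares"] * min(story.shares / 10_000, 1.0)
            + WEIGHTS["editor_boost"] * story.editor_boost
            + WEIGHTS["freshness"] * freshness)

stories = [
    Story("Budget vote passes", shares=12_000, editor_boost=0.2, recency_hours=30),
    Story("Celebrity adopts puppy", shares=4_000, editor_boost=0.0, recency_hours=2),
    Story("Interview with the vice-president", shares=800, editor_boost=0.9, recency_hours=6),
]

# The "algorithm" merely sorts by score - but every number feeding that sort
# was chosen or produced by people.
for s in sorted(stories, key=score, reverse=True):
    print(f"{score(s):.2f}  {s.headline}")
```

Change the weights, add or drop a signal, and a different front page comes out. None of those choices is value-neutral, no matter how automatic the sorting at the end may look.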

Is that a problem? Not necessarily. Whether it is a problem depends on many factors: What kind of bias is it? What assumptions do the users have? How open is the provider of news about the bias built into the system? These and probably many other questions need to be discussed to determine whether there actually is a problem.

What certainly is a problem: News stories that call these systems "neutral" and assume they do not include value judgements.