
In an almost 5,000-word “blog post”, Zuckerberg (plus, we assume, two dozen or so of the company’s public policy hacks and lawyers) has laid out Facebook’s idea of how to deal with the crisis the company is facing. The article is titled “A Blueprint for Content Governance and Enforcement” and structured in nine parts:

  1. Community Standards
  2. Proactively Identifying Harmful Content
  3. Discouraging Borderline Content
  4. Giving People Control and Allowing More Content
  5. Addressing Algorithmic Bias
  6. Building an Appeals Process
  7. Independent Governance and Oversight
  8. Creating Transparency and Enabling Research
  9. Working Together on Regulation

It’s too early and there’s just too much in it to provide a full assessment of what Zuckerberg suggests, so I’ll focus on some aspects and offer some thoughts. This is all tentative, but we have to start somewhere – so in case you think I’m wrong, please say so without shouting. We may even end up having a discussion and learning from each other.

On first reading, the text can be understood as a sharp turn in the company’s thinking on content governance, a matter at the core of a network with more than two billion users. Then again, it can be seen as another one of the company’s numerous (more or less) skillfully conceived red herrings.

Facebook’s argument in a nutshell

In a nutshell, the argument goes like this:

Since there’s a lot of content on Facebook that some people (and governments) have a problem with, we’ll decrease the amount of this content being seen, using “AI” either to block it completely or to reduce its visibility. In order not to be seen as censoring the platform, we make sure our AI does the right thing, give people choice about how to calibrate their feed and – for the very, very rare instances our process screws up – institute an appeals mechanism with an independent expert body to decide the cases we don’t want to decide ourselves. To round this off nicely, for the cases when self-regulation is not enough we will – drumroll – work with governments on regulation (offering nothing but metrics about doing something we want to do anyway).

Everyone happy? Well, not quite. Facebook has been caught between a rock and a hard place for a long time, being criticized mainly by citizens, governments and some activists for not taking down enough “hate speech” and “disinformation”, and by other citizens and activists (but not governments) for taking down too much legal content.

The first faction goes so far as to say Facebook is a threat to democracy because it allows people to sow hate and manipulate the public discourse and even elections. The second argues it’s a threat to freedom of expression because it blocks content that is actually legal in the respective countries on the basis of Facebook’s community standards. And because Facebook is so dominant that people can’t just choose an alternative platform – making it part of the public realm itself – the discourse itself is jeopardized.

Walking the tightrope like a drunken elephant

So Facebook has been walking a tightrope for years, albeit like a drunken elephant most of the time.

For now, I’m not going to discuss the validity of the “Facebook is a threat to democracy” position here; I’ll just tell you that I think this claim is overblown and harmful in itself, because it ascribes way too much power to Facebook and distracts from important discussions about the various causes of democracies indeed being threatened. But the argument is out there, made all around the world, and it’s powerful enough to pressure the company to act on it. Germany’s much-discussed network enforcement law is a particularly aggressive example, providing for fines of up to 50 million Euros in cases where social networks systematically fail to take down illegal content quickly.

The law has been criticized harshly by all stakeholder groups for providing incentives to “overblock”, meaning that networks would erase borderline content to avoid fines. But free speech advocates also direct their criticism at Facebook for having done this anyway for years, using its “community standards” (which are of course corporate standards used to control users, not standards developed by a community) to block content that is perfectly legal in the respective jurisdictions.

The problem with jurisdiction

This seems to be a dilemma impossible to untangle: making the law of one country (e.g. the US) the basis of what’s admissible would mean that Facebook would be unable to operate in many other countries in the world (don’t just think Saudi Arabia, also think Germany or France), but making the laws of all the countries it operates in the basis would result either in no content to look at or in no cross-border network. So both options are no-go areas for the company, but also for champions of free speech. Many governments seem to like the option where their laws apply and will be enforced fully (naturally, they’re governments!) – but that has mostly to do with policy makers’ ignorance about the “series of tubes” that constitutes the Internet.

Against this background, Facebook’s plans start to make a lot of sense. If you have to (more or less) abide by the laws of all countries your company operates in but want to keep as much content on the platform as possible, you need to compromise. So far, the company has applied its infamous former motto “move fast and break things” to societies’ standards all around the world, publishing billions of illegal posts while pointing to the fact that Facebook’s place of jurisdiction is California. Now that pressure is mounting from countries where Facebook makes tons of money (that it wants to keep making), it has to come up with something better.

Ill-defined concepts make for a flawed process

This “something better” Facebook proposes is a combination of technology and governance structures. On the basis of Facebook’s “Community Standards”, “AI” “proactively” identifies “harmful” and “borderline” content, avoiding “algorithmic bias” and giving users “control” of what they see, accompanied by an “appeals process” with “independent governance and oversight”, supported by “transparency” and “research”, “working together” with governments on “regulation”. In this summary, I put in quotation marks all the expressions that are badly defined or fuzzily applied and therefore highly debatable. Not much left except conjunctions and articles, right? Let’s look at it in detail.

“AI” in this case means filters trained to identify content that falls into specific categories. Sounds great coming from one of the world’s biggest tech companies, which has one of the highest budgets for AI research and development, right? But speech – in the sense of the meaning of words in a specific context – is one of the most complex things to classify, and right now automated content filters fail miserably at this job. It’s hotly contested whether they’ll ever be able to succeed at it, but that’s a philosophical question. Right now, they can’t reliably identify what “harmful” is (and there goes “borderline” with it), so when Zuckerberg says that “We are also making progress on hate speech, now with 52% identified proactively”, then we should read the sentence as saying “our systems classify a ton of posts as hate speech that then never see the light of day, therefore never being assessed by a human being, which means we have no idea how many of them are false positives, meaning content taken down erroneously, harming freedom of expression.” Yes, this is (just) a little bit simplified, and there are ways to control for false positives and biases. The people at Facebook are no dummies and I’m sure many of them want to do the right thing – but as Zuckerberg himself writes, “Overall, this work is important and early, and we will update you as it progresses” – read this as saying “we don’t know what we’re doing but that’s okay because no one does.” On top of that, as long as we have no independent research on this (see below), we should not take the word of a company that is caught lying every other month.
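To make the point about the 52% figure concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is invented for illustration – these are not Facebook’s figures – and the assumed false-positive rate is precisely the quantity we cannot know without independent review.

```python
# Back-of-the-envelope illustration (all numbers invented): a share of posts
# "identified proactively" says nothing about how many of those removals were
# wrong, because blocked posts are never seen - let alone reviewed - by anyone.

removed_proactively = 520_000    # hypothetical: posts the filter blocked before anyone saw them
removed_after_reports = 480_000  # hypothetical: posts removed after user reports

share_proactive = removed_proactively / (removed_proactively + removed_after_reports)
print(f"Identified proactively: {share_proactive:.0%}")  # -> 52%, the headline number

# The missing piece: the filter's false-positive rate. Assume, purely for
# illustration, that independent reviewers would find 8% of the proactive
# removals to be perfectly legitimate content.
assumed_false_positive_rate = 0.08
wrongly_removed = removed_proactively * assumed_false_positive_rate
print(f"Posts removed in error under that assumption: {wrongly_removed:,.0f}")
```

The headline metric comes out at 52% whether that assumed rate is 1% or 20% – which is exactly why it tells us so little about freedom of expression.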

More content, more control, fewer violations – has Zuckerberg found the holy grail?

So what does that leave us with on the “Giving People Control and Allowing More Content” front? Not much, I have to say, although I very much like the underlying idea. As I said, when it comes to speech, context is everything. One person’s lie is another person’s parody, one person’s freedom fighter is another one’s terrorist. This will never change (and it most probably shouldn’t). So putting people in control of what they want to see is a good idea. But if the choice given to users is based on a flawed definition of harmful and borderline content (the “Community Standards”), followed by a flawed classification of harmful and borderline content (“AI”), there’s going to be a pretty bad sample to exert “control” over. Combine that with Facebook’s intent to make this control an opt-out option, and we’ll probably end up with a system that is better than the current one in theory only, not in practice.

So what about an appeals process with independent governance and oversight? As with the “put people in control of what they see” proposition, I’m very much in favor of the idea. We’ve been saying for quite some time that platforms shouldn’t be arbiters of speech. This does not apply to Facebook alone, of course, but also to Google, Twitter and others, and I personally criticized the European Court of Justice’s “right to be forgotten” decision for this reason. At the same time, courts cannot decide about thousands of contested posts, search results, videos and tweets. There has to be a cascading system. But the system’s end point can’t (of course) be within a company. For content that is not taken down, that is already the rule in many countries: if companies keep content up, people can go to court to have it taken down (I know, in practice this is possible in only a few countries in the world, and even there companies make the process as difficult, costly and frustrating as they possibly can, first and foremost Facebook itself – but it still is part of the legal process, at least theoretically).

An appeals mechanism without teeth

But when it comes to deleted content that you want to have reinstated – let’s say because you find it’s perfectly legal in your jurisdiction and you should be allowed to publish it – the buck stops at the companies’ “deciders”. Why? Because companies are companies, and they do not have a (legal) duty to protect freedom of expression. They can just apply their house rules and cannot be forced to publish anything. Which fundamentally is a good idea, because freedom of expression also comprises the right to choose what you want published on your platform. But this argument becomes more than just a little problematic when your platform has become so dominant that it constitutes a (not the) public realm itself. You can think along the lines of a public utility or an infrastructure for speech. I would argue that with Facebook (and Google search and YouTube) we’ve arrived at this point.

So Facebook’s new appeals mechanism is a step in the right direction, but of course it doesn’t fundamentally change the fact that the decision stays within the company. What is fascinating, and may even be a turning point in the discussion, is that Facebook now acknowledges this by proposing to appoint an independent board for governance and oversight “whose decisions would be transparent and binding” to the company. So if someone appeals a decision made by Facebook’s moderators, it may end up at the board, which then makes a decision that the company has to honor.

Sounds great, doesn’t it? Independent oversight! Facebook cedes power to a board of experts! I don’t know if it’s just me, but I simply have no clue whatsoever how this is supposed to solve the underlying problem: the impossibility of deciding about speech at a global level. I’ll leave aside all the procedural problems that come with a body that’s supposed to be independent from the company that pays for its existence and the work it does (because there will be a lot of work here, no one can do it pro bono); just read the sentence “as our board of directors is accountable to our shareholders, this body would be focused only on our community” and let it sink in; you’ll see what I mean.

No common ground

I’m talking about what happens with a case of Holocaust denial that’s illegal in Israel and Germany and France and 14 other European countries but not in the US and most other countries in the world. I’m talking about what happens with cases of nudity that are illegal in the US but not in the Netherlands and Sweden. I’m talking about the fact that even between countries that honor freedom of expression, there’s no agreement on what this freedom comprises because it differs from country to country, their societies and norms, and finally their laws. I’m talking about the fundamental challenge that comes with the fact that Facebook is a global platform operating across 180 plus national jurisdictions.

I don’t want to call it a problem, because when it comes to freedom of expression, most of us see national jurisdictions as a feature, because they respect cultural contextuality. To global platforms, this is a bug. And because we now have global platforms, it’s also a bug to freedom of speech advocates (like myself), because we simply don’t have an idea yet of how to reconcile free speech with the existence of global platforms and national, contextual laws. (Disclosure: Because of this, we declined to make suggestions about whom to nominate – not for the board itself, but for a selection committee for the board – when we were approached by Facebook Germany’s policy people some weeks ago.)

Regulation? Not really

Having said all this, it becomes difficult to see Zuckerberg’s suggestions for transparency, enabling research and collaboration on regulation in the most favorable light. I’ve heard people say that it’s interesting to see platforms changing their approach to regulation from flatly fighting it to even asking for it. Zuckerberg’s text seems to be a case in point. But looks can be deceptive. I agree to a great extent that “defining the acceptable rates of different content types” is a very good idea to explore, because in “reality, there will always be some harmful content, so it’s important for society to agree on how to reduce that to a minimum – and where the lines should be drawn between free expression and safety”. But “AI”-driven, “proactive” management (blocking and hiding) of “harmful” and “borderline” content, combined with regulation that only forces companies to report certain “metrics”, is – for the reasons laid out above – just not gonna cut it.
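To illustrate what regulation built on “defining the acceptable rates of different content types” and reported “metrics” could look like, here is a short, hypothetical sketch. The categories, thresholds and view counts are all made up; the point is only that such a regime measures aggregate rates, not whether individual decisions were right or how much legal content was blocked along the way.

```python
# Hypothetical sketch of a "report your metrics" regime: compare measured
# prevalence of content categories against agreed ceilings. All category
# names, ceilings and counts are invented for illustration.

acceptable_rates = {            # hypothetical regulatory ceilings, as share of all views
    "hate_speech": 0.0005,
    "disinformation": 0.001,
}

measured_views = {              # hypothetical measurements from sampled content
    "hate_speech": 60_000,
    "disinformation": 90_000,
}
total_views = 100_000_000

for category, ceiling in acceptable_rates.items():
    rate = measured_views[category] / total_views
    status = "within limit" if rate <= ceiling else "above limit"
    print(f"{category}: {rate:.4%} of views ({status}, ceiling {ceiling:.4%})")
```

Note what such a report does not capture: false positives, blocked legal speech, or anything about how the “AI” arrived at its classifications.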

So this “blueprint” may work as a means to appease critics and legislators and get Facebook out of the regulatory spotlight it hates so much (though I hope it won’t). As a “Blueprint for Content Governance and Enforcement”, not so much.


Image: Three Judges, Honoré-Victorin Daumier (Link)


Posted by Matthias Spielkamp
