Freedom of expression and Facebook: Should local FoE laws prevent Facebook from blocking content?

This is not as straightforward as one might think. On the one hand, one might argue:

Facebook is a private company. They offer a service to users (free of charge but with ads), and since they own it, they should be able to control it as they see fit. If users don’t like it, they can go elsewhere.

We can call this the market argument. The counter argument (as phrased by Falkvinge) is this:

“At the end of the day, this is about the fact that the public square, where freedom of speech used to be enforced, has moved in under the terms-and-services umbrella of a private corporation, where they enforce their own arbitrary limits of what may be expressed and not. That means our fundamental rights have effectively moved into the hands of private interests. I welcome a challenge to this doctrine and an enforcement of freedom of speech, once a public discussion forum – like Facebook – has grown large enough to be a de-facto public location, if not the de-facto public location.”

Or, to put it my way:

The point above about it being a private company is valid. However, the “users can go elsewhere” part is not very realistic given Facebook’s near monopoly. Certain types of services, such as social networks, function better the more users they have, and thus tend toward near monopolies, with one or only a few dominant players on the market. When this happens, users face a choice between a service that is useful (where the other users are) and one with strong protection of freedom of speech (if such a service exists). When such services also become a very important part of life for important matters such as communication (measured as the share of the total population using them; for the US, about 48% of the population has a Facebook account), there is reason to enforce freedom of expression (FoE) on these services despite their being privately owned. Not doing so would in practice mean that private companies decide the limits of FoE, which could have negative social consequences because certain topics could not be discussed.

By now, unless you are some kind of libertarian/anarcho-capitalist, you should be convinced that the issue is not as straightforward as it might appear.

Enforcing freedom of expression in practice

Suppose we go ahead and say that freedom of speech must be protected on Facebook to the same extent that it is protected in the country generally (which, for most countries, is not that much: even most Western countries limit freedom of speech in important ways). How exactly would this happen?

Suppose a French native based in France creates an account located in France. Suppose that France’s FoE law is quite broad: it allows nudity, hardcore porn of any kind as long as it involves consenting adults, hate speech, blasphemy, racism, sexism, false claims of convictions, etc. Now suppose the French user starts posting porn, and Facebook doesn’t like porn. Facebook might want to delete it and perhaps block the user (current practice, in fact). However, if FoE were enforced here, this would not be legal. Facebook has some options:

  1. Make a filter option that is turned on by default (hidden somewhere in advanced settings) and hides any kind of content, including porn, that Facebook does not like.
  2. Show this content only to users from France.
  3. Show this content only to users from countries with protection for porn expressions.
  4. Disallow people based in France from creating profiles.

Now, (4) seems like an unlikely option: it would cause Facebook to lose a lot of ad revenue. (1) could potentially be struck down by a court as de facto limiting FoE too much, but may work for most purposes. If the purpose of blocking porn is that some users are (thought to be) sensitive to it, then this option would work fine. Choosing (2) or (3) would limit FoE for cross-national purposes (if the user has friends based in other countries). Whether this could be struck down by any national court is a good question. Who has jurisdiction over cross-national speech? Each country by itself? Both countries jointly?

In any case, (1) seems like a realistic choice. If some material is reported as being over the limit, it can be put into the ‘dangerous stuff’ stream not shown to most users (presumably, unless it becomes very trendy to disable the filter). One could also combine (2)-(3) with (1). But it is clear that if Facebook used one of these methods, it would incur considerable costs: maintaining an up-to-date database of each country’s laws and classifying content into the categories that can and cannot be shown in each country.
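To make the options above concrete, here is a minimal sketch of how per-country visibility checks might look. Everything here is hypothetical: the country codes, content categories, and the `FOE_ALLOWS` policy table are invented for illustration, not drawn from any real law or Facebook system.

```python
# Hypothetical policy table: which content categories each country's
# FoE law protects. Entries are invented for illustration only.
FOE_ALLOWS = {
    "FR": {"nudity", "porn", "hate_speech", "blasphemy"},  # the broad law in the example
    "US": {"nudity", "blasphemy"},
    "DE": {"nudity"},
}

def visible_to(viewer_country, viewer_filter_on, content_categories):
    """Decide whether a viewer may see a piece of content.

    viewer_filter_on models option (1): a default-on filter that hides
    flagged content. The per-country lookup models options (2)-(3).
    """
    if not content_categories:
        return True  # ordinary, unflagged content is always shown
    if viewer_filter_on:
        return False  # option (1): the filter hides all flagged content
    allowed = FOE_ALLOWS.get(viewer_country, set())
    # options (2)-(3): show only if every flagged category is protected
    # in the viewer's country
    return content_categories <= allowed

# The French user's porn post from the example:
print(visible_to("FR", viewer_filter_on=False, content_categories={"porn"}))  # True
print(visible_to("US", viewer_filter_on=False, content_categories={"porn"}))  # False
print(visible_to("FR", viewer_filter_on=True, content_categories={"porn"}))   # False
```

Even this toy version shows where the cost comes from: the policy table must be kept current for every country, and every flagged post must be classified into the right categories before the check can run.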

Other civil liberties?

But why stop at FoE? What about other civil liberties? Should we also enforce them on Facebook where relevant? If a country’s basic law/constitution includes due process requirements, does this mean that any decision Facebook makes regarding citizens of that country must adhere to local due process laws? This can be problematic. Suppose two citizens are involved in a case, and they are from different countries with different due process laws. Which country’s law should be followed? Both? Just one of them? What if the laws are inconsistent, so that enforcing both is impossible?

Given the recent NSA spying scandals, one might also wonder about right-to-privacy laws. What if adhering to country X’s laws means that some user data must be protected, while adhering to country Y’s laws means it must be openly available to secret agencies without a court order (even a secret one)? It is not clear.