Facebook finds itself a target of public condemnation yet again, this time after its refusal to fact-check or flag political speech from elected officials drew a sharp contrast with Twitter, which in recent days took the unprecedented step of adding labels to President Trump’s tweets, either for violating its rules or for questionable factual accuracy. Many hailed Twitter’s move as a long-overdue and welcome step to curb the spread of misinformation and potentially dangerous speech that could incite violence, while denouncing Facebook’s defense of its self-professed commitment to free expression as an abdication of the platform’s moral responsibility. Scores of current and former Facebook employees took to social media to express their disapproval, and some even resigned. Should Facebook have followed the lead of Twitter, which has also banned political ads?
A communications platform such as Facebook, in its ideal form, is a reflection of our own society – honest and accurate, free of artificial distortions introduced by the platform itself. And in our society, despite a common misconception, one does not have an unchecked, constitutionally guaranteed right to express oneself anywhere, in any manner. What the Constitution protects against is the government imposing restrictions on speech through legislation, which means that non-government entities – individuals, social units, private organizations and enterprises – can and do impose constraints on what one may say, write or otherwise express. What those constraints are, in turn, is determined by an organically formed common understanding of the social norms we collectively find acceptable in each entity’s context, weighing the costs and benefits of such curtailments of freedom. What is permissible speech and what is not are, ultimately, products of social convention, not absolute and immutable moral truths.
The problem with calls for Facebook to tighten its controls on permissible speech is that they ask for constraints along dimensions on which our society as a whole has reached no consensus (as opposed to dimensions where consensus does exist, such as the understanding that pornography should be excluded from certain places, including Facebook). Take fact-checking political claims, for example. The veracity of such claims is often a matter of degree, and there are no bright, clear lines we can agree on that separate truth from falsity. What if you make a numerical claim and you are off by 10%? Should that be flagged as factually inaccurate? What if you are off by 5%, or 1%? What if an accurate measurement is impossible to begin with and everyone is speaking in estimates? How do we then determine what is correct or incorrect? What if a claim is open to an offensive interpretation but unfalsifiable, such as “this person may be a criminal, or maybe not”?
Aside from factual accuracy, Twitter also flagged one of President Trump’s tweets for “glorifying violence.” What does “glorifying” mean? Even if we could agree on a definition, if “glorifying” violence is not acceptable, what about “personally holding it in high regard”? Is saying “I love violence” allowable? If the problem is the potential to cause a violent outcome, do we agree on what counts as “violent”? What if I wrote on Facebook, “Hello David, please go and punch Sam in the face”? Should that be banned? What about “I command you to tickle Sam until he feels significantly uncomfortable”? We simply have no widely shared agreement on any of these notions.
When we ask Facebook to create and enforce rules that limit permissible speech in ways on which there is no broad consensus, what we are asking, in effect, is for Facebook to make its own rules and do as it pleases. That is ironic in a world where many have raised the alarm that social media platforms such as Facebook already have too much power over our lives. Should Facebook have even greater power, so that it not only elevates and amplifies certain content but actively obscures and censors the rest?
What we should want instead is for Facebook to have less power. The insidious effect of the enormous power the likes of Facebook hold today is that they exercise unparalleled control over the flow of information, dictating what we see, read, and watch, and thereby become singular influences on what we think, feel, and, ultimately, do. What we should really want is for Facebook to relinquish that control, eliminate the algorithms that selectively amplify and promote certain content in ways that can be manipulative, and hand control over what users see, read, and watch back to the users themselves. This will not, of course, eliminate misinformation and hateful speech online, just as they have not been eliminated, unfortunately, from our society. But removing Facebook’s selective amplification machinery will reduce the incentive to push divisive and sensationalistic content ever further toward the extremes in the hope of being promoted by the algorithms. When social media platforms go back to being faithful reflections of our society, without the distortions of self-serving algorithms, they may even have the potential to serve as tools for its betterment.