Are Facebook and Its New Oversight Board Up to the Task When It Comes to Health Misinformation?



With much fanfare, Facebook’s newly established Oversight Board recently released its first batch of decisions on the social media company’s contentious content moderation practices. All but one of the Board’s initial decisions, published on January 28, were reached unanimously. Perhaps aiming to emphasize its purported institutional and conceptual independence from Facebook from the outset, the Board overturned the platform’s decisions to take down pieces of content in four of the five cases and upheld the original ruling in only one. The matters at issue were clearly meant to reflect a representative selection of some of the most contentious subjects on social media today, ranging from female nudity in the context of health education to anti-Muslim hate speech and pandemic-related misinformation. Reactions so far have been cautiously positive but not without misgivings, both about the details of the particular cases under scrutiny and about more principled concerns with the process itself. The Board’s general approach and procedure appear reasonable at face value, but how does it fare in its attempt to address the urgent and still growing problem of COVID-19 misinformation on Facebook?

The need for independent oversight

Facebook created the Board in response to persistent demands by civil society actors for greater accountability in decisions that affect its users’ freedom of expression. The Board is conceived as an external, independent body that takes on the burden of reviewing a select number of decisions about what content to leave on the platform and what to take down. Projected to eventually consist of 40 members from around the globe, the Oversight Board issues rulings that are supposedly binding on the company. Considering that Facebook routinely makes tens of thousands of such decisions each day, most of them with the help of automated algorithmic systems, the Board’s role will be limited to scrutinizing only a few emblematic cases that are either difficult, politically significant, or otherwise globally relevant in the sense that they may appropriately shape Facebook’s future content moderation policies.

The Oversight Board’s principal task is to find an appropriate balance between the social media company’s guiding principle of “Voice” – that is, the right of all users to express themselves freely and without constraint when posting content on the platform – and “Safety”, which roughly denotes protection from every form of speech that might prove harmful to other users, Facebook’s “community”, or society writ large. In doing so, the Board ought to apply Facebook’s own “Community Standards” as well as a rather vaguely defined body of international human rights law, most prominently epitomized by the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights.

The ‘imminent harm’ of medical misinformation

The one case tackling the problem of misinformation unfolded in October 2020, when a French user posted a video, with accompanying text, in a public Facebook group devoted to the COVID-19 pandemic. The video and text criticized the French public health authorities’ decision not to green-light the use of hydroxychloroquine combined with azithromycin as a remedy for the disease, calling the treatment both effective and harmless. After the post had been viewed about 50,000 times, gathered more than 800 reactions, and been shared roughly 600 times, Facebook removed it, citing its WHO- and health expert-guided policy of disallowing content that rises to the level of “imminent harm”. Users, the company argued, might be led to ignore precautionary health guidance or even self-medicate after being exposed to the content in question.

In its decision, the Oversight Board disagreed with Facebook on all substantive points. In particular, it denied that the video and text met the “imminent harm” standard, which is stated in Facebook’s own rulebook and formed the basis of the original decision. Holding that the post did not in fact encourage other users to self-medicate with hydroxychloroquine but was mainly aimed at criticizing French governmental policies, the Board argued that the company “failed to provide any contextual factors to support a finding that this particular post would meet its own imminent harm standard”. While the Board acknowledged that “misinformation in a global pandemic can cause harm”, it did not find sufficient grounds for limiting the user’s freedom of expression in this instance. It thus ordered Facebook to reinstate the original content and advised it to create a new Community Standard on health misinformation in order to clarify its rules and more clearly define the policy’s central notion of misinformation.

Facebook and the Oversight Board at odds

Facebook responded to the decision by consolidating information about its policies on COVID-19 misinformation. At the same time, it openly refused to heed the Board’s recommendation to employ less intrusive measures in the future, such as downranking or adding informational labels, instead of removing false or misleading health-related content outright. Referring to its consultations with the WHO and global public health experts, the company contended that “if people think there is a cure for COVID-19 they are less likely to follow safe health practices, like social distancing or mask-wearing”. Consequently, it committed to continuing to remove any content with false or misleading claims that are “likely to contribute to the risk of increased exposure and transmission or to adverse effects on the public health system”.

While the company’s reaction to the Board’s decision, like the original removal of the post in question, may seem defensible, the Board is of course correct in its observation that Facebook’s rationale contradicts its own “imminent harm” standard. What the company did instead was to tacitly introduce an entirely new, much broader criterion for objectionable health-related misinformation: “likely contribution to risk of harm”. This mismatch points to a fundamental dilemma that all regulators, companies, and decision-making bodies invariably face when dealing with misleading or false information that could prove harmful to its audience: the adverse effects of exposure to most dis- or misinformation unfold slowly. Instead of directly manipulating a target audience’s behavior and immediately leading to harmful consequences, most false or intentionally misleading content gradually affects people’s attitudes. While such outcomes are no less real and corrosive, they are much harder to reliably track and measure. Even where there is a demonstrable correlation between a piece of misinformation and harmful effects, it is incredibly difficult to establish a causal link between the two; for that, there would have to be users who actually act in harmful ways (by ingesting hydroxychloroquine or refusing to wear masks in public, for example) because they were exposed to the post in question. This inherent difficulty affects the “imminence” calculus as well: it will rarely be possible to show that a piece of misinformation posted on Facebook contributes to “imminent harm”.

Facebook’s articulated stance of erring on the side of caution amid a global pandemic is reasonable and in line with the recommendations of most global health experts, including those of the WHO. Its reluctance to comply with the Oversight Board’s decision is understandable. But the company should be more honest and explicit about the standard it applies and why. It’s not that health misinformation is not harmful. Quite the contrary. Establishing that adverse effects are not just possible but imminent, however, may be an impossibly high threshold to meet. To be sure, applying a lower standard for removal carries the risk of disproportionately limiting freedom of expression, so the Oversight Board is right to emphasize that Facebook must proceed with care. Yet given that controversial content routinely generates the most user engagement, which can then be monetized, perhaps the company deserves some credit for trying to address the problem for once. Either way, it should be transparent about the standard it applies in such cases. Although this won’t guarantee that the Oversight Board accepts the removal practice in the future, it will at least enable a more sincere and reasonable discourse about this eminently important issue.

 


The Israel Public Policy Institute (IPPI) serves as a platform for exchange of ideas, knowledge and research among policy experts, researchers, and scholars. The opinions expressed in the publications on the IPPI website are solely those of the authors and do not necessarily reflect the views of IPPI.


