This commentary critically assesses current trends in online media control in liberal democracies. It argues that protecting society against disinformation campaigns has become a prevailing and increasingly successful legitimatory strategy for governmental restrictions on the free flow of information. This has enabled governments to design and implement new forms of origin-based regulation of information flows, with potentially detrimental effects on global internet freedom and cross-border communication. The content moderation practices of major platforms, which operate in the shadow of hierarchy, may even reinforce these trends: platforms seek easily applicable solutions in order to accommodate public concerns and forestall stricter governmental control. The commentary discusses these trends and suggests alternative ways of tackling disinformation.
In late September 2021, YouTube banned two German-language news channels of RT (formerly known as Russia Today) from its video-sharing platform. RT is a media company funded by the Russian government and has been criticized both for being influenced by the Kremlin and for spreading propaganda and disinformation in Western public spheres. YouTube justified its decision by citing RT’s repeated violations of the platform’s COVID-19 misinformation policy. The Russian government, however, interpreted the ban as an offensive act steered by the German government and announced retaliatory measures. This rhetorical escalation is better understood in context: RT had previously sought an official broadcasting license for Germany, which the German authorities refused. In the case of the YouTube ban, however, the German government officially denied any responsibility.
This commentary is not about adjudicating these allegations from either side, or even about the issue itself. Nor is it about downplaying the detrimental effects that disinformation campaigns can have, or the role that the Russian government may be playing in international information operations. On a more abstract level, however, the case illustrates three current, interconnected trends in both international affairs and online media governance. First, measures of online content regulation are expanding, including in liberal democratic societies. Second, the threat of disinformation increasingly serves to justify content restrictions, a pattern of political reasoning that has become even more prevalent in the context of the pandemic and sluggish vaccination campaigns. Third, perceptions of the disinformation threat have opened avenues for origin-based control of information flows, following a structurally nationalist rationalization. This is likely to reduce global freedom of information in the years to come.
Internet freedom in decline
Trends towards increased online content regulation are not new: they have been evident for at least a decade. The most recent Freedom on the Net report issued by Freedom House documents a global decline of internet freedom for the eleventh consecutive year. This development has not been produced by autocratic or hybrid regimes alone; liberal democracies, too, have adapted their policies and are thus co-responsible for the increase in control. Recent prominent examples of national legislation include the German Network Enforcement Act (NetzDG) and the French Avia Law (loi Avia), which was essentially inspired by the NetzDG and largely struck down by the French Constitutional Council. Another example, addressing disinformation directly, is the French law against fake news (loi infox). The loi infox was adopted in late 2018, after the 2017 presidential election campaign had been targeted by cyber attacks and foreign disinformation. These are only three examples of the prevalent reform activity currently underway in democratic countries.
Fighting disinformation as legitimatory strategy
While the abovementioned regulatory reforms (except for the loi infox) were initially directed at fighting hate speech and extremism, a more recent trend is now apparent: the predominant problematization of disinformation with regard to electoral manipulation or, since the onset of the pandemic, public health. Because one would generally expect a deep-rooted tension between democratic media policy and freedom of expression, and even an inherent “bias against control” in liberal democracies, restrictive measures can only rely on legitimatory discourses that are likely to gain public acceptance. Arguably, fighting disinformation has become the most compelling pattern of justification in this regard.
Going for the origin of a message is easier than going for the content
Governance measures, including state legislation, do not stop at notice-and-takedown rules; they also point towards a new type of media control that is essentially origin-based. Taking the foreign origin of an actor as an implicit or explicit criterion for blocking, removing or discriminating against the content they spread is easier than detecting harmful content itself. In this sense, the recent upturn in FARA-like legislation can be seen as a trend towards origin-based regulation. FARA, the US Foreign Agents Registration Act, was originally established in 1938 with the intention of countering covert actions by foreign countries, and was reformed after the 2016 presidential election.
Australia approved FARA-like legislation in 2018 as a countermeasure to alleged Chinese information operations. In reaction to the Brexit referendum of 2016, the UK government proposed legislation in 2020 that was clearly and explicitly inspired by the US FARA.
So far, FARA-like regulations have faced fundamental criticism within the EU, owing to prevailing concerns that such policies may result in stigmatization and be detrimental to the cross-border activities of NGOs and journalists. However, the Baltic states have used EU sanctions against Russian oligarchs as a pretext for banning RT’s television channels from their public media spaces. Furthermore, the French loi infox includes a provision directed against foreign media companies like RT and Sputnik.
Additionally, beyond governmental regulation, the content moderation practices of major platforms may even reinforce these trends towards origin-based regulation of information flows, as platforms operate in the shadow of hierarchy: they seek easily applicable solutions in order to accommodate public concerns and forestall stricter governmental regulation.
Future effects and alternative approaches
All in all, current trends in the origin-based regulation of information are likely to further reduce internet freedom, freedom of expression and the transnational flow of information. Fighting disinformation is, of course, an important challenge for our societies, but it should be pursued with great care and transparency. A structurally nationalist rationalization of demarcated information spaces requiring protection from external influences should be avoided, even when democratic discourse or election campaigns may be affected. Instead, common guidelines should be developed for social media providers, requiring greater transparency in content moderation and in dealing with artificial amplification. Such rules, however, should be generic and should apply regardless of the assumed foreign origin of an actor or message.
This Commentary is published as part of the German-Israeli Tech Policy Dialog Platform, a collaboration between the Israel Public Policy Institute (IPPI) and the Heinrich Böll Foundation.
The opinions expressed in this text are solely that of the author/s and do not necessarily reflect the views of the Israel Public Policy Institute (IPPI) and/or the Heinrich Böll Foundation.