“Legal tools to suppress disinformation can do more harm than good”
Disinfo Talks is an interview series with experts who tackle the challenge of disinformation through different prisms. Our talks showcase different perspectives on the various aspects of disinformation and the approaches to countering it. In this installment we talk with Dr. Henning Lahmann, senior researcher and head of the International Cyber Law program at the Digital Society Institute at ESMT Berlin.
Henning, can you tell us a bit about what you do and how disinformation became part of your work?
My work deals with everything that concerns international law and the Internet, and cyberspace more generally. A few years ago, at the beginning of the Trump presidency and after the Brexit Referendum, people who had previously dealt mainly with traditional cyber operations saw disinformation as closely related and turned to the topic. That was my entry point as well, since I was working on cyber security. But I quickly realized that these issues are not as connected as people thought, or still think, they are. In fact, I believe they are completely different topics that need to be analyzed separately; but that’s how I got my start.
How are disinformation campaigns different from cyber attacks?
The difference is in the impact: A cyber attack is an operation that interferes with an information technology system with immediate effect. For example, in a ransomware attack, malware is installed on a target computer and encrypts its data, so that the user can no longer access the information unless they pay the ransom. That’s not how disinformation works. Disinformation aims to affect a person’s behavior or attitudes – it’s a process. In most cases there are no immediate effects, and even when attitudes or behavior do change, it is extremely difficult to establish a causal link. For example, we can show that the Russians disseminated disinformation before the 2016 U.S. presidential election, but to this day, we haven’t been able to prove that even one American voted differently, or decided not to vote, because of it. With disinformation, we can demonstrate the conduct, but we cannot reliably link it to the effect, which is very important from a legal perspective.
Looking at how our understanding of disinformation developed, how big is the problem right now? How worried should we be, in your opinion?
I think we should be worried. Especially in the context of the pandemic, we’ve seen massive disinformation about the virus, its remedies and the vaccines, and there’s reason to believe that it impacted vaccine uptake, for example. At the same time, we can’t really see what kind of disinformation leads to what outcome. What we do see is an overall effect, and the overall effect is dangerous – both in terms of the pandemic and how we deal with that, but also in terms of a general loss of trust in democratic institutions. I think these effects are real and we have been witnessing them over the past five years. It’s a growing problem and we need to find ways to deal with it.
What can be done from a legal point of view to tackle this challenge?
The instinctive legal approach is to find out who is responsible for a certain outcome and to punish them, or to sanction them so that they stop what they are doing. But so far this has been a largely futile effort. Since we cannot simply ban all untrue statements on the internet, the only way to hold anyone accountable is to prove that their content causes harm and is directly connected to adverse outcomes, such as more COVID-19 deaths or lower voter turnout; this difficulty of proving causality has been the problem. We are now seeing the limits of a legal approach that seeks to punish malicious actors. We know that they exist, but we struggle to hold them accountable.
Did the pandemic make a difference in how lawmakers and legal scholars approach this problem?
It mostly added a sense of urgency. When this topic first emerged after the 2016 elections and the Brexit Referendum, everyone was very concerned, but at the same time, it didn’t appear to be a matter of life or death. Since the pandemic, however, there has been a realization that disinformation can cause immediate harm. Legal scholars, but also other practitioners and policymakers, are realizing that we need to find new tools to tackle the problem more directly, and not rely simply on fact-checking and media education. In the past, when disinformation was perceived to be mostly related to political issues, policymakers were very reluctant to say, ‘we need to suppress this information,’ because of free speech considerations. But as soon as it came to health-related disinformation connected to COVID-19, there was a much greater willingness to argue that rules must be devised to directly suppress and delete online information that may be harmful.
Do you expect the trend of speech suppression as a strategy for combating disinformation to continue?
I think the answer is, sadly, yes. There is now a broader shift towards trying to tackle the problem by suppressing speech, and this could be applied more generally, to other types of information. In Germany, in the run-up to the federal election in September, there was a debate as to whether certain narratives should not just be countered but actually suppressed. I do think that this is a result of how we dealt with pandemic-related disinformation over the past year, and it is a concerning development in terms of the erosion of speech protections. France, for example, plans to establish a government agency that is meant to identify foreign influence campaigns but will de facto decide what content is ‘true’ and what is ‘false.’ It reminds me of the “Ministry of Truth” in Orwell’s dystopian novel, 1984. There’s a real danger of government overreach – it was perhaps justified in the context of the pandemic, but as soon as we start applying this approach to other issues, we are on a downward trajectory.
Given the limitations of the legal approach, can it offer solutions to the problem?
We need to rethink the view that it is impossible to hold social media and other digital media companies accountable for content posted on their platforms. They may not have published the potentially harmful disinformation themselves, but they need to have mechanisms in place to check such content and, as a last resort, to take it down. This would mean changing the law. In the US, that discussion has already begun, and it is pressuring platforms to take preemptive measures and regulate themselves while the law stays as it is. One such example is Facebook’s Oversight Board, which rules on whether content should be taken down in disputed cases. At the same time, it’s important to require platforms to find alternatives to suppressing content, such as educating their users to recognize when they are exposed to potentially false information. We’ve seen promising attempts by the platforms to do so during the pandemic, for example by displaying links to official information sources next to posts about COVID-19. I think it’s a step in the right direction: it doesn’t patronize users but takes them seriously, offers counter-narratives and fact-checks, and shows them ways to find resources and inform themselves.
Having more rules and regulations is often perceived as a positive development, but is there a concern these might be misused by less democratic governments?
Absolutely, and this relates to the problem of defining what exactly counts as ‘disinformation.’ A lot of the potentially harmful narratives out there are not actually wrong, but we identify them as ‘misleading’ or as ‘bending the truth.’ As soon as we consider this as ‘disinformation’ that we need to suppress, it becomes easier for other governments – including those that already suppress information that counters their interests – to do the same.
And it’s not only China and Russia. We are seeing examples of governments introducing more restrictive ‘anti-disinformation’ laws in Asia and Latin America. This is also happening in Europe, in Hungary and in Poland. And there is always the concern that others might adopt a similar approach. Disinformation is definitely a problem for democratic societies, but we shouldn’t overreact, because the more we suppress potentially harmful speech, the more we give other, less democratic governments an opening to be more restrictive. In the long run, this could be more harmful to everyone than disinformation.
What do you think is the greatest challenge legal scholars currently face when trying to tackle disinformation?
Legal tools to address any problem always run the risk of becoming too paternalistic. We need to find solutions that take the users – the targets of disinformation – more seriously and empower them rather than tell them what to believe. Of course, I believe there are concrete truths about the virus, about vaccination, and about whether Hillary Clinton heads a pedophile ring. But at the same time, if there are people who want to believe false narratives, we should remain tolerant of that, because democratic backsliding is ultimately more harmful. This might be the biggest challenge for legal scholars and political decision-makers: to realize that there are limits to our approach, and that our legal tools can do more harm than good.
Dr. Henning Lahmann is a senior researcher and leader of the International Cyber Law program at the Digital Society Institute at ESMT Berlin. His work focuses on cyber security and transnational security, disinformation and information operations, human and civil rights, privacy and data protection, internet governance, and the use of AI in conflict situations.
Since May 2020, Henning has been a researcher on the project “Protection of the Global Information Space” at the Geneva Academy of International Humanitarian Law and Human Rights. From August to November 2019, he was a visiting research fellow at the Israel Public Policy Institute in Tel Aviv, funded by the Heinrich Böll Foundation, where he conducted a project on disinformation campaigns and information warfare in international and public law. He remains a non-resident research associate at the IPPI.
This Interview is published as part of the Media and Democracy in the Digital Age platform, a collaboration between the Israel Public Policy Institute (IPPI) and the Heinrich Böll Foundation.
The opinions expressed in this text are solely those of the author(s) and/or interviewee(s) and do not necessarily reflect the views of the Heinrich Böll Foundation, the Israel Public Policy Institute (IPPI), and/or the German Embassy Tel Aviv.