"We work to make the political campaigning environment more trustworthy, transparent and comprehensible for people."

Disinfo Talks is an interview series with experts who tackle the challenge of disinformation through different prisms. Our talks showcase different perspectives on the various aspects of disinformation and the approaches to countering it. In this installment we talk with Sam Jeffers, Co-Founder of “Who Targets Me”, which monitors the use of online political ads in real time and provides analysis of their intended impact.

Tell us a bit about yourself. What do you do? How did you become involved with disinformation?

I spent seven years working in and around digital campaigns (the Obama campaigns of 2008 and 2012, and campaigns in France, India, Brazil, the UK and Ireland), helping progressive political parties get their message across and win elections. I left my job in 2016, but one question clearly emerged from that 2016 cycle of elections (both Brexit and Trump): “What’s the power of social media, and particularly of social media advertising, to influence people in election campaigns?” In response, I joined forces with a colleague to build a tool to work out what was going on in real time and to reverse-engineer local advertising strategies. To me, the most interesting part is less about seeing malign actors causing trouble and more about how to make the political campaigning environment more trustworthy, more transparent and more comprehensible for people. Political campaigns in the digital age are full of technique, cleverness and persuasive people trying to get an advantage over their opponents. Crowdsourced data collection allows us to see things through people’s eyes in real time: What’s the balance of the different information that people see? How much right-wing advertising do they see relative to left-wing advertising? How much do they see from candidates as opposed to parties? How much do they see on some topics relative to others? How much are they targeted because they are active within a political party rather than simply being courted as prospective voters? That gives us plenty of different ways of thinking about putting up new guardrails around democratic processes.

What are some of the key questions that you’ve been focusing on?

One question that we are looking at is the actual frequency of political advertising. In 2016, there was this myth that people were flooded with political advertising. However, outside of an election period, political advertising is usually less than 1% of all Facebook advertising. When election periods roll around, it might be about 15%. People will notice political advertisements, and they will be a relatively frequent part of their newsfeed, but only for a short period of time. The second question varies a lot from country to country and, obviously, by electoral system: What kinds of campaigns do people see? Do they see content from only one party? Do they see content skewed in a particular direction, or campaigning from across the whole political spectrum? How polarized is their view? Finally, what happens to people when they face “weird” content? Usually it is not state actors directly running destabilizing disinformation campaigns using advertising, but rather domestic actors who aren’t very keen on showing who they are. Political parties mostly play by the same rules: if one party is talking about housing one day, another party will be talking about housing the next day to criticize it. There’s a flow to it, whereas non-party actors will just throw unusual, destabilizing issues into the mix (without saying who they are, or who’s funding them). Since there are no guardrails in most democracies when it comes to money and advertising, we need to think hard about how we invest in the institutions of electoral processes and democracy in order to build up trust.

Do you see an interaction between the tactics that you’re describing and disinformation as it’s being perceived by the public, politicians, and researchers? Where is the connection?

I worry about blurring the boundaries between disinformation and information that people just don’t like. This is a question of: What’s harmful versus what’s illegal? Where are the boundaries of free expression online? Everything is getting very blurred and mixed up, and runs the risk of being used for political advantage by different groups of people. What is it about social media that says we need to get far-right speech off platforms? It’s a really complicated question. There’s often this tendency, particularly among people who come into the disinformation space from the campaigning side, to say “this shouldn’t be on the Internet” without clear criteria for why not. It’s similar to what we see with political advertising: Some people argue that online political advertising should be banned entirely. Yet there are opportunities in digital campaigning that can be good for democracy. We should work out how to foster those while putting up sufficient guardrails to make it harder for people who want to abuse and misuse them. We need to get to a place where we’re much more concrete, understanding the probabilistic nature of a digital campaign. We’re never going to solve it perfectly. There is no consensus about the fundamental principles that should guide us in determining what people should be able to do in democracies.

Some legal scholars are skeptical of the approach of criminalizing fake news or censoring the Internet, arguing that this could cause more harm than good and that it’s a slippery slope.

I agree. A lot of what we focus on is “volume.” I know that some free speech scholars consider this angle as well. The idea is that the Internet just has so much stuff on it, right? If you went back 25 years, you could pretty much model everyone’s media environment: the number of newspapers that would land on people’s desks in the morning, the TV channels, and the radio stations. Today, the sheer volume of different content that one is exposed to during a day is incomprehensible. There’s just too much content. Particularly with advertising, one of the things that we recommend is that there should be fewer ads. At the moment, the only accountability system is people like us, who do large-scale programmatic, quantitative analysis of online political advertisements.

Have you observed any specific trends that are related to the pandemic?

One place where the pandemic has clearly mattered is in eliminating in-person campaigning in a lot of countries. The expectation was that in the absence of in-person campaigning, people would spend their money online. Indeed, in some of the 2020 elections, even more money was spent online than expected because there were no door-to-door solicitations. The Biden campaign, for example, invested heavily in Facebook ads because the Democrats weren’t going to do their normal big mobilization work. There have also been some interesting minor policy changes made by platforms. For example, very early on in the pandemic, Facebook banned adverts for masks, due to the huge proliferation of Chinese mask companies trying to get people to panic and buy masks, vastly inflating the prices. No one told them “you must ban mask advertising” and “now you must unban it”; they have to make these judgment calls based on media pressure balanced against public health advice. It has been interesting to try to understand whether or not platforms respond to events in a trustworthy way.

Regarding content banning and the question of platform regulation, who determines what is allowed? Who sets the boundaries for platforms?

I certainly think that we’ve reached the phase where we’re now talking about what regulation might look like. It is now an accepted idea that the platforms are unregulated and that they need to be cut down to size. I’m always torn regarding the wider conversation about that. What are the right mechanisms? You get people talking about it from a privacy and data-collection perspective. Other people talk about monopoly and breaking up the companies. Some people talk about content regulation. We’ve tried to focus narrowly on political advertising, though it is actually a broad topic. There are many things that governments could do. The problematic issue is the amount of money spent on political advertising. It’s very hard to persuade someone who has just won an election, having spent a million dollars on Facebook, that those million dollars shouldn’t have been spent in the first place and that we need new rules. There is some opportunity for government regulation, but at the same time, there’s a lot of room for platforms to innovate, to design safer environments, better tools, and better integrity and security policies. It will most probably be a mixed approach in the end, but I would like to see governments start to take more of a lead.

Can you elaborate on what regulations platforms can introduce? What type of regulation is missing? Do you see the situation changing? What would it take for politicians to be more proactive on this matter?

The types of policies that we’re in favor of are strong “know your customer” checks for political advertising. Who is buying substantial sums of political advertising on a platform should be verifiable and auditable. Governments should worry about someone showing up with a million dollars and spending it very quickly. Regulators should be concerned about that happening and try to create safeguards against random actors. Our view is that you should ban social features from paid advertising altogether and restore the accountability systems that democracies have established over hundreds of years. In addition to removing social features, we suggest reducing the number of consecutive ads that people can run, adding limits on some types of targeting, and providing more transparency. Specific transparency reporting is needed about moderation processes, along with data standards (i.e., what data should be published, how different types of organizations can get access to that data, and what they can do with it). These feel like measures that the platforms are unlikely to put forward themselves, and therefore some government initiative is necessary.

Currently, it seems that two camps have emerged, with the platforms on one side (in a defensive position, managing the PR response to criticism), and their critics on the other, ranging from the mildly critical to “burn it all down,” “jail the executives,” and the like. As of now, I’m not sure that dialogue between the two sides is possible. Nevertheless, the scenario in which Facebook no longer exists two years from now seems very unlikely, so dialogue is vital. There needs to be some middle path that at least tries to say, “Look, we have ideas too about how you might make your platforms better,” and then for the platforms to say: “We are listening in a reasonably transparent way to those ideas; we have tried some of them in the past, they didn’t work for these reasons, we would like to try these ones, can you help us?” This would require agreeing that no one has a monopoly on wisdom in this space, and that we all need to find a way of moving forward. I do increasingly worry that the whole thing is so broken that it has basically become a Twitter argument between people who work for platforms and people who don’t, which is not an environment conducive to advancing viable solutions. I am not sure how, or by whom, this gap between the parties could be bridged, but it seems to me that some mediation effort is needed. It is important to de-escalate and to be careful not to drive the debate into deadlock.


This Interview is published as part of the Media and Democracy in the Digital Age platform, a collaboration between the Israel Public Policy Institute (IPPI) and the Heinrich Böll Foundation.

The opinions expressed in this text are solely those of the author(s) and/or interviewee(s) and do not necessarily reflect the views of the Heinrich Böll Foundation and/or the Israel Public Policy Institute (IPPI).
