Beyond Regulation: a new approach to dealing with digital disruption



“Citizens are speaking to their governments using twenty-first-century technologies, governments are listening on twentieth-century technology and providing nineteenth-century solutions,” former US Secretary of State Madeleine K. Albright said in reference to the challenges posed by computational propaganda. Governments around the world find themselves ill-equipped to address the complexities of regulating the tech industry, let alone to understand, control, and predict the societal impacts of technology and the threat it consequently poses to democratic processes.

Recent elections in the United States, France, Brazil, and other countries revealed how data breaches, dark ads, uncensored inflammatory speech, and disinformation can be used to tilt the balance towards one side of the political spectrum, and more often than not towards radical factions. In response to this trend, governments have accelerated regulatory plans for tech companies: the French government recently imposed a digital tax on tech giants, moving ahead of the plan under discussion at the EU level to tax tech giants according to the countries where they generate revenues rather than where they are based. Germany’s Network Enforcement Act (NetzDG) imposes fines on such companies if they fail to comply with the hate speech requirements of the country’s criminal code.
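To make the destination-based principle concrete, here is a minimal sketch in Python. The 3% rate mirrors France’s digital services tax; the company, its country breakdown, and all revenue figures are entirely hypothetical.

```python
# Illustrative sketch only: a toy comparison of residence-based versus
# destination-based taxation. The 3% rate mirrors France's digital services
# tax; the company and all revenue figures are hypothetical.

DIGITAL_TAX_RATE = 0.03  # tax rate applied to in-scope digital revenue

# Hypothetical revenue (in EUR) of a tech company headquartered in Ireland,
# broken down by the country where the revenue is generated.
revenue_by_country = {
    "Ireland": 2_000_000_000,  # country of residence
    "France": 800_000_000,
    "Germany": 600_000_000,
}

# Under residence-based taxation, only the headquarters country taxes profits.
# Under the destination-based approach, each country taxes the revenue
# generated within its borders, regardless of where the company is based.
for country, revenue in revenue_by_country.items():
    tax_due = revenue * DIGITAL_TAX_RATE
    print(f"{country}: revenue {revenue:,.0f} EUR -> digital tax {tax_due:,.0f} EUR")
```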

Facing increased pressure from governments, tech companies have also offered self-regulatory measures, having realized that taking control of the regulation debate and guiding it according to their interests is far more advantageous than waiting for governments to build their own capacity to regulate. Facebook’s efforts to disclose who pays for political ads and why those ads target specific users followed the dark-ads scandal of the 2016 US elections. Twitter, for its part, took action to shut down millions of fake accounts.

While tech companies’ self-regulatory measures have improved certain aspects of social media platforms, they fail to address the roots of the problem, which are entrenched in their business models. Reducing users’ exposure to harmful content is only incentivized to the extent that such changes do not affect how long users stay on the platforms. After all, the time users spend online, exposed to ads, is at the core of the tech business model, and the gratification users feel while using these platforms is directly related to the likeability of the content they see. Algorithms successfully select pleasurable content to keep people on the platform, with the side effect of reinforcing their political biases.
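This feedback loop can be stated in a few lines of code. The following is a purely illustrative sketch, not any platform’s actual system: the names, the one-dimensional “lean” score, and the engagement proxy are all hypothetical, but they capture the incentive described above.

```python
# Minimal, purely illustrative sketch of an engagement-driven feed ranker.
# All names and scores are hypothetical; real ranking systems are far more
# complex, but the core incentive is the same: maximize predicted engagement.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    political_lean: float  # -1.0 (left) to 1.0 (right), hypothetical label

def predicted_engagement(post: Post, user_lean: float) -> float:
    """Engagement tends to be higher for content that matches the user's
    existing views, so agreement acts as a proxy for 'pleasurable' content."""
    agreement = 1.0 - abs(post.political_lean - user_lean) / 2.0
    return agreement  # higher score -> shown earlier in the feed

def rank_feed(posts: list[Post], user_lean: float) -> list[Post]:
    # Optimizing purely for engagement sorts bias-confirming content to the
    # top, which is exactly the reinforcement loop described above.
    return sorted(posts, key=lambda p: predicted_engagement(p, user_lean), reverse=True)

feed = rank_feed(
    [Post("moderate analysis", 0.0), Post("partisan take", 0.9), Post("opposing view", -0.9)],
    user_lean=0.8,
)
print([p.text for p in feed])  # the partisan take surfaces first
```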


Wait, let us fix that mess.

Shortly after the 2016 US election results, Mark Zuckerberg stated that more than 99% of the content people saw on Facebook in the run-up to the elections was “authentic” and that people were exposed to only a very small amount of fake news and hoaxes. Studies proved him wrong, and the Cambridge Analytica scandal forced tech companies to offer quick solutions to prevent their products from being used for manipulation.

In the wake of these scandals, companies began to exert greater control over ads running on their platforms and issued transparency reports as a way to increase accountability for content that violates community standards. Facebook tweaked its content prioritization algorithm to value social interactions over content shared by pages and groups, a step meant to reduce exposure to false content. But this was no sustainable solution: the change deepened the business model crisis of reliable media outlets, which rely on social media for clicks and readers, so the new setup harmed not only pages that spread fake news but also credible newspapers. Additionally, by focusing so heavily on Facebook, the company’s efforts fell short when it came to controlling the use of its other applications – Instagram and WhatsApp – for spreading disinformation in recent elections.

Between governments and tech companies, there is a clear case of information asymmetry when it comes to dealing with the challenges technology products pose to society. Tech companies are indeed better positioned to offer solutions because they have the technical capacity and knowledge to do so. Governments, however, are charged with keeping the public interest at heart, despite lacking the know-how. The self-regulatory movement changed tech companies’ attitude towards state regulation, as they began to understand that regulation would come whether they liked it or not. The only way to avoid an impact on their business models would be to take control of the debate and shape regulation according to their interests.

Ok, we need your help. But let us tell you how to do this…

With regard to the regulation of tech companies, two main areas can be observed. One relates to material rules such as tax revenues, data protection, ad transparency, and privacy standards. The other applies to behavioral aspects of users, namely their political behavior, in relation to the content shared on these platforms, including less tangible questions such as hate speech, political polarization, and misinformation.

Advances have been made in legislation dealing with questions in the first area. Espionage scandals and several breaches of users’ data raised awareness, and several countries approved data protection laws, most prominently the General Data Protection Regulation (GDPR) in the European Union. With the #GiletsJaunes movement forcing concessions, the Macron administration recently imposed a tax on tech companies operating in France, anticipating the digital tax proposal under discussion at the European level.

It is harder to offer feasible regulatory solutions with regard to user behavior. The Macron administration introduced a law outsourcing control over the spread of misinformation during elections to the courts; it failed to pass the Senate in 2018. A similar failed approach in Italy sought to impose fines on users who shared false information online. So far, the only measure in place is Germany’s Network Enforcement Act, in force since 2017, which holds social media platforms accountable for the content they facilitate by demanding that hate speech content be taken down in accordance with the country’s criminal code. Lacking official evaluation, the law’s effectiveness is dubious: the transparency reports issued to date do not assess the overall impact on hate speech, but rather document improvements the companies were already making without the law.

Zuckerberg suggested that he would welcome state regulation of ad transparency along the lines of what is applied to TV and print media outlets, an area where Facebook’s platforms have indeed improved since the use of dark ads in 2016, a strategy that allowed campaigns to micro-target voters according to their specific interests based on personal data. However, guiding the regulation debate towards questions that are already being discussed adds nothing to the role of the state in the matter. Even though it is clear that the most effective short-term solutions come from the tech companies themselves through self-regulation, the role of the state should not be neglected solely on the basis of the information asymmetry argument.

What should be the role of the state?

The state often implements more effective instruments with regard to data protection and tax laws, which demand only a basic understanding of how user information is handled or how revenues are generated. However, governments have yet to successfully understand and address the impact on users who are exposed to harmful content, polarization, or misinformation online. Criminalizing users, litigating the spread of false information, and imposing fines have been at the core of proposed legislation. Even though such measures put pressure on social media companies to act, they present no innovative or sustainable solution to the challenge of dealing with the cognitive and behavioral influence that social media use exerts on users’ political beliefs.

Tech companies believe that if they appear negligent about removing dangerous content from their platforms, regulators, who barely understand how these platforms function, will eventually step in with bad regulation. Even though the scientific community expects tech companies to adopt a more responsible stance regarding the technology they develop, it is still necessary to enhance the public sector’s capacity to understand the challenges new technologies pose and to regulate them accordingly. Technological developments will continue to challenge the political landscape of the world, be it through the automation of labor markets or by influencing users’ behavior in many different ways. If governments do not step in, their institutions will quickly be rendered outdated structures for dealing with future challenges.

The state needs to update its governing structures and mechanisms in order to address the disruptive social impacts of technological innovation. Regulatory instruments supervising the finance sector could provide a blueprint for appropriate governmental regulation of the tech sector, including the capacity to perform algorithmic audits and to identify false accounts that artificially boost content. Other instruments can be found to make tech innovators more accountable for the platforms and products they develop, but regulators need to take action if they want a seat at the table when it comes to shaping the future of technological development.
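What could such an audit look like in practice? The sketch below is a deliberately simplified illustration, analogous to how financial supervisors screen transactions for anomalies. The data format, threshold, and account names are all hypothetical, and a real audit would combine many signals rather than this single one.

```python
# Illustrative sketch of one check a regulator-side auditor could run.
# The threshold and data format are hypothetical; shares_log maps an
# account ID to the timestamps (in seconds) at which it shared a given
# piece of content.

from statistics import pstdev

BURST_THRESHOLD = 5.0  # hypothetical: flag accounts whose shares cluster tightly

shares_log = {
    "account_a": [10, 11, 12, 13, 14],    # shares in a 4-second burst
    "account_b": [5, 3600, 7200, 90000],  # organic-looking spread
}

def looks_automated(timestamps: list[float]) -> bool:
    """Very tight, regular share timing is one crude signal of artificial
    boosting; a real audit would combine many such signals."""
    if len(timestamps) < 3:
        return False
    return pstdev(timestamps) < BURST_THRESHOLD

for account, times in shares_log.items():
    if looks_automated(times):
        print(f"{account}: flagged for review (suspiciously regular sharing)")
```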

It is fair for the tech industry to offer ways in which regulation can help them address the social problems posed by their technology; however, they should not guide the debate just because they hold a comparative information advantage. Knowing more than governments does not mean that tech giants are positioned to offer better solutions; on the contrary, the apparent conflict of interest only incentivizes them to offer the solutions they deem better. A government body acting as a technological regulator needs to audit the actions of such companies. Getting rid of the information asymmetry between these companies and elected government bodies is the first step towards shaping twenty-first-century solutions for twenty-first-century challenges.

Israel Public Policy Institute (IPPI) serves as a platform for the exchange of ideas, knowledge, and research among policy experts, researchers, and scholars. The opinions expressed in the publications on the IPPI website are solely those of the authors and do not necessarily reflect the views of IPPI.
