On the Added Value of ‘Big Data’ for Public Policy – and why it is ok to have ex-ante regulation for artificial intelligence
A myriad of complex questions
When decision-makers are confronted with complex conflicts and urgent crises, they want to be perceived as swift, decisive and competent responders. Whether a pressing issue affects national security or public health, the response is frequently tied to the development of new digital technology or the rapid mass-adoption of existing digital technology. This ‘digital solutionism’ is perfectly illustrated by policy responses to the current pandemic. The use and repurposing of surveillance assemblages such as CCTV systems, telecommunications and internet use data, digital sensors, as well as drones is expanding quickly and seemingly everywhere. Data revealing the location of groups and individuals, generated by telecommunication providers and Internet corporations, has become highly desirable. Apps to digitally trace the spread of COVID-19 have been developed and deployed across the world in 2020. Finally, even the design of some social credit scoring systems in China was adjusted in response to the public health emergency.
However, while decision-makers might be perceived as ‘finally doing something’ when adopting such measures, this does not mean that these measures are desirable or effective in the medium to long term. Concerns about human rights (and privacy in particular) emerge immediately. While these are valid, the strong focus on data coincides with a shift of power that may be even more concerning. As more data-driven measures are adopted by governments across the world, the dependency on digital ecosystems transfers agency to those who control them. This undermines individual and collective sovereignty, which raises urgent questions about the added value of Big Data for public policy: its purpose, its legitimacy, as well as its ultimate impact.
Knowledge is power
Certainly, the emergence of ‘Big Data’ and more advanced data analytics has created new possibilities for the mapping and analysis of complex situations, as well as for the development of public policy. More recently, the mere analysis and reporting of findings has been augmented with the possibility of feeding the large amounts of almost instantly available information into systems that can be trained to recognize patterns, translate these into predictions, or even react autonomously to situations (‘artificial intelligence’ or AI). An example of such systems based on trained algorithms is Predictive Policing, in which historic crime data is combined with other potentially relevant factors (e.g. weather, average income in an area, street grid patterns, etc.) to predict the likelihood of crimes in a specific area.
While such ‘insights’ seem useful to decision-makers in law enforcement and across all sectors of public policy, broad consensus on the appropriate and legitimate data basis for their creation is still absent. For more than a decade, Predictive Policing has been subject to comprehensive criticism relating to a whole range of ethical, legal and social issues, including concerns about bias and discrimination implicitly embedded in the results that such systems produce. Looking at the use of Big Data to inform public policy more broadly, scandals such as Cambridge Analytica have made large-scale data analysis in political contexts unsettling for many. Above all, the use of Big Data seems to transform formerly independent citizens into will-less objects of ‘big nudging’. The point is simple: a democracy cannot function if it cannot be assumed that voters make up their minds freely and independently.
Why it is ok to have ex-ante regulation for AI
Hence, some argue that there is an urgent need for regulation reining in the use of Big Data and AI. Based on ethical guidelines, the EU is currently working on a legislative proposal to regulate AI and autonomous systems. Nevertheless, a recent report by the German civil society organization AlgorithmWatch also highlights how fast the adoption of such data-driven systems proceeds in many European countries despite the lack of such a framework. At the same time, also within Europe, there are those who demand that technology be allowed to develop freely and independently. Proponents of this position argue that the use of autonomous systems yields insights that would otherwise take humans much longer to reach, if they are humanly attainable at all. Furthermore, the economic potential of new services based on these technologies should be brought to fruition as quickly as possible.
Ultimately, however, in order to argue for ex-ante regulation of AI and the use of Big Data in public policy, one does not need to pick a side. More than twenty years ago, at a time when the digital sphere seemed less influenced by exorbitant political and economic interests, the (by comparison) innocent internet already had a lively governance dynamic in place. It was described through a framework consisting of code/architecture, the market, law, and moral/ethical norms. Certainly, these dynamics differed from the traditional governance structures applied to territory, but to this day this basic description of the dynamics remains valid. Hence, the question is not whether governance mechanisms as such are in place. It is rather how implicit or explicit they are, and how much they need to respond to and sustainably empower a broad set of stakeholders, which is particularly vital for democracies.
Moving from narratives to performance indicators
One of the most striking aspects of the rapid adoption of autonomous systems is the lack of a clearly predefined set of performance indicators by which to evaluate them as they evolve. Turning one more time to Predictive Policing: it has never been statistically proven that the use of such systems significantly lowers crime rates. Another example supporting this argument is digital contact tracing apps, which have so far failed to make meaningful contributions to limiting COVID-19 infection rates. Rather than being based on comprehensive considerations of purpose, legitimacy and ultimate impact, the adoption of these measures turns out to be largely based on narratives. Quite ironically, this transforms the use of Big Data in public policy making from a source of evidence-based decision-making into a tool of advanced storytelling. It is not inconceivable that this will change in the future. Yet in order to make this transition, it might be necessary to change strategy and to holistically embrace the diversity and complexity of society, rather than reducing it to overly simplified models which fail to stand the test of time.
The Israel Public Policy Institute (IPPI) serves as a platform for exchange of ideas, knowledge and research among policy experts, researchers, and scholars. The opinions expressed in the publications on the IPPI website are solely that of the authors and do not necessarily reflect the views of IPPI.