Cross-Cultural Values Defining AI Governing Principles

Photo “Banana / Plant / Flask” by Max Gruber / Better Images of AI – https://betterimagesofai.org/images?artist=MaxGruber&title=Banana%2FPlant%2FFlask


There is a consensus that AI needs to be organized around ethical principles and values. Like all iterations of technological progress that preceded it, artificial intelligence is no exception to the rule that technology itself is inherently neither good nor bad. Rather, it is how technology is applied that places it in one realm or the other. One can use nuclear fission to produce electricity or to build an atomic bomb. During the Second World War, enormous efforts were undertaken to create a weapon of mass destruction that would devastate the enemy, its cities and its environment. It is reported that all those who witnessed the test of the first nuclear bomb on July 16, 1945 were taken aback by the potential cruelty this new human creation could unleash. Yet the defeat of the enemy, Japan in this case, was the paramount goal. The subjugation of Japan, a terribly aggressive colonial power, was seen as the ethically preferable outcome.

This well-known example leads directly to the question of what principles will govern the age of AI. Applications of this technology can be used for total surveillance, curtailing the freedom of entire nations. The governing Communist Party of the People’s Republic of China, for example, uses all sorts of AI applications to crack down on the Uyghur minority in Xinjiang province. Up to one million people are incarcerated in concentration camps against their will for what the CCP calls “re-education.” Those Uyghurs who are not held captive in a camp have to live in a giant open-air prison. Facial recognition devices help the regime oppress and degrade the twelve million Uyghurs in Xinjiang. Cameras in mosques record the intimate act of praying; in Beijing’s eyes, following the religious practice of Islam is seen as a first step towards engaging in terrorist activities.

The moral dilemma at hand is how to regulate AI technologies that could help to better organize life in mega-cities, provide security and improve the citizen and consumer experience, but that are vulnerable to being abused for horrendous crimes. At first glance, the obvious solution to this dilemma might be simply not to sell this technology to a dictatorship such as the People’s Republic of China. However, although this might help, it falls short of eradicating the problem at its root. Doing that would mean creating, perhaps for the first time in human history, a technology that is inherently good: one governed by ethical principles, whose qualification as a good technology does not depend on proper usage but arises from its quality of being good in itself.

Technology for good and evil

The distinction between dictatorships and autocracies on the one hand and democratic, free societies on the other helps to clarify the situation and determine the first step. The underlying normative principle of free societies is human dignity, a principle that is not achieved haphazardly but through an enshrined, and even encoded, set of legal principles, namely human rights, which are the foundation of all legal codes and a governing ideal for the rule of law. Constitutions guarantee that human rights, which include both civil and social components, are upheld and cannot be revoked or abolished by any parliamentary majority. Civil rights are inalienable rights that enable people to participate in the public sphere; they include freedom of expression, freedom of conscience, and freedom of religion, art, culture and, of course, the press. The right to an attorney and a fair trial are also civil rights, as are universal suffrage and free elections. The social rights correspond to the civil rights: a society that is free (and by implication democratic) enables its citizenry to fully exercise its social rights. It is therefore paramount that such a society provide basic social necessities, the most basic being free education and affordable health care for all.

When it comes to the universality of these principles, it is safe to say that today they are accepted, in theory, by all nations that signed the Universal Declaration of Human Rights. De facto, there are nations among them, including the People’s Republic of China, that do not uphold these principles, even though they have vowed to do so.

Let’s look into the ethical dilemmas democratic societies might face when it comes to the application of those human rights that, as I see it, should be the guiding principles to unlock the good and positive potential of AI.

By way of introduction, it should be noted that democratic countries today span all cultural and religious spheres of the planet. Beginning in the Far East, we find New Zealand and Australia, followed by Japan, South Korea, Taiwan, India, and Mongolia – all democracies. The underlying cultures are diverse, as are the religious heritages that also inform peoples’ ethical decisions: Shintoism, Buddhism, Hinduism. Continuing westward, the states of Europe are constitutional democracies that share the heritage of ancient Greek and Roman civilization, yet were divided for the longest time along Christian denominational lines and the different ethical schools that derived from them or flourished under their influence. Lastly, all the Americas are nominally democratic, from Canada to Argentina. These democracies vary in their specifics, but all subscribe to the same set of values in principle.

Are there universal principles?

Similarities notwithstanding, the devil is in the detail, namely in ethical problems such as the one highlighted by the Moral Machine, an experiment compiled globally by researchers at MIT, whose task was to start a dialogue about the guiding principles of automated vehicles (AVs). One of the most imminent changes that AI will bring is that automated cars will sooner or later be appearing on our streets, in our villages and on our motorways, saving uncountable hours of human work and, potentially, minimizing the risk of accidents. The ethical question posed by the Moral Machine project is: “For whom would the automated car brake?” Imagine a scenario where the AV faces a life-or-death decision, a precarious moment where it either steers into a child that, left unattended, runs into the street, or runs into a group of senior citizens, possibly causing severe injuries to all of them and perhaps even killing one. The underlying ethical question is: should the AV prioritize the lives of the young over the old?

The Moral Machine’s question is a variant of the trolley problem, which has already been studied extensively across cultures. In the problem, a train is heading perilously towards a group of five workers on the tracks. The potential victims are not aware of the approaching train. The survey participant is presented with three possible reactions to this scenario. One is to do nothing and let the train kill the five workers. Another is to use the railway switch to redirect the train to another track, killing one worker on that track instead of the five on the original track. The third is to push a fat person onto the track, who would stop the train but die in the process. The majority of those surveyed across cultures repudiated this last option; at the same time, many favored the switch solution.

The survey outcomes varied depending on the cultural background of respondents. In Eastern cultures, where age is regarded differently than in Western countries, the responses were not as strikingly in favor of sparing the younger person. The Moral Machine also introduced another option, namely steering the car into a wall, leaving everyone alive except oneself. The vast majority of survey participants did not choose the option of dying as a sacrifice for all the other road users. Self-preservation transcends cultural difference, at least according to the findings of the Moral Machine project. It is stronger than the instinct for preservation of one’s kind, conservatio sui generis, although this instinct is apparent in studies where one must decide how many of one’s own species should die. As the surveys based on the trolley problem illustrate, people tend to prefer minimal “collateral damage” where possible. For the Moral Machine study, 70,000 people were surveyed, in ten different languages, in 42 countries. Although the nuances do vary, the study finds, a general trend, which may reflect a universally held value, shines through.

Fighting bias

If the present study, like previous studies, had not included respondents from different cultural backgrounds, this would have limited the data and narrowed down the possible outcomes. The authors of the trolley survey therefore specify in their results that prior surveys, most of whose data came from Western countries (with relatively small samples from other parts of the world), might not have had a sufficiently diverse sample population to claim to have identified universal principles. The authors of the study refer to biases that are indeed the inherent enemy of the well-intended generation of data for purposes of a good and just application of AI. As has been shown in several instances, attempting to solve today’s problems with yesterday’s data perpetuates biases from the past. One example is the use of algorithms trained on historical criminal-justice data to predict the likelihood of recidivism, with the results then used to determine prison sentences. Instead of relying on a presumed correlation between socio-economic conditions and recidivism, the algorithm is applied. Potentially, such algorithms could obviate social biases. However, this would require transparency. At present, such algorithms acquired by the courts are “black boxes,” as the courts themselves do not really know how they work. Since the algorithms are created by private businesses, it is more than likely that these businesses see humans as consumers rather than as citizens, and are not committed to upholding their constitutional protections against discrimination. Therefore, while AI has the great potential of eradicating social injustices based on race, gender and the zip code one lives in, this is not guaranteed unless there is transparency.
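
To make the point about transparency concrete, the following is a minimal, purely illustrative sketch in Python. The data, feature names and model are invented for the example and do not describe any system actually used by courts; they merely show how a score trained on historically skewed records inherits that skew, and what a court could audit if the model were open to inspection rather than a black box.

```python
# Purely illustrative sketch: invented data and feature names, not any
# real system used by courts. It shows (a) how a score trained on
# historically skewed records inherits that skew, and (b) the kind of
# audit that is only possible when the model is not a black box.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical features: a flag for a heavily policed neighborhood, and a
# prior-arrest count that is inflated *because* of that heavier policing.
neighborhood = rng.integers(0, 2, n)              # 1 = heavily policed area
prior_arrests = rng.poisson(1 + 2 * neighborhood)

# Historical label ("re-arrested"), which again reflects policing intensity
# rather than underlying behavior -- yesterday's bias baked into the data.
rearrested = rng.binomial(1, 0.2 + 0.3 * neighborhood)

X = np.column_stack([neighborhood, prior_arrests])
model = LogisticRegression().fit(X, rearrested)

# A transparent model can at least be audited: the weights reveal that the
# "risk" score is driven by where someone lives and how often they were
# policed. A proprietary black box offers no such view.
for name, weight in zip(["neighborhood", "prior_arrests"], model.coef_[0]):
    print(f"{name:>15}: {weight:+.2f}")
```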

The zip code is a topic in itself, as it has become another discriminatory tool in the unthinking creation of algorithms. Insurers and delivery services alike decide how to treat people on the basis of where they live. Are you deemed trustworthy enough to have ordered goods delivered to your address, based on the likelihood that you might not pay for them, or that the package might be stolen if left outside while you are not at home? Will you be approved for car insurance if an algorithm detects a higher probability that the car might be stolen in your neighborhood? When implemented, such predictions have potentially far-reaching effects. They can cement social immobility in a given zip code, or act as a catalyst for gentrification, as the demographics of a given zip code will receive a socio-economic boost if algorithms make certain residential areas more attractive to people higher up the socio-economic ladder.
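
A blanket zip-code rule of the kind just described can be sketched in a few lines. The codes, rates and threshold below are invented, but they illustrate the core problem: every resident of a flagged area is treated identically, whatever their individual record.

```python
# Purely illustrative: a blanket decision rule keyed to invented zip-code
# "risk rates." Every resident of a flagged area is treated identically,
# regardless of their own record -- which is how such rules cement
# immobility in a given zip code.
claimed_theft_rate = {   # hypothetical historical theft rates per zip code
    "10115": 0.02,
    "12043": 0.09,
}
APPROVAL_THRESHOLD = 0.05

def offer_car_insurance(zip_code: str) -> bool:
    """Approve or decline car insurance on zip-code risk alone."""
    return claimed_theft_rate.get(zip_code, 0.0) < APPROVAL_THRESHOLD

for zip_code in ("10115", "12043"):
    decision = "approved" if offer_car_insurance(zip_code) else "declined"
    print(zip_code, decision)
```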

A third example is dating apps. Although dating apps have the potential to cut across racial and socio-economic boundaries (i.e. they enable users to virtually encounter people they wouldn’t meet in person while circulating in their usual circles), they feature filters that invite users to enter the racial background of their desired partner, perpetuating stereotypes harbored over a lifetime, often beginning in childhood. Some dating apps do not collect data on the race of their users, yet the algorithm learns for whom users swipe right or left, exposing a racial preference over time. The algorithm thereby learns to mimic the racial bias of its human user-creator. Again, this is mostly due to the economic metric underlying the algorithm’s creation; the app’s profit is based on how successful it is in fulfilling its aim of enabling matches between users. It is certainly not the perceived duty of the algorithm to draw attention to the user’s racial biases.
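
The mechanism can be illustrated with a deliberately simplified sketch: no attribute called “race” appears anywhere, yet the ranking reproduces the user’s pattern through proxy attributes such as neighborhood. The attributes and swipe history are invented for the example and do not describe any real app’s algorithm.

```python
# Minimal sketch of the feedback loop described above: the app stores no
# data on race, but it ranks new profiles by whatever attributes past
# right-swipes shared -- and those attributes (neighborhood, school, etc.)
# can act as proxies for race. All names and data are invented.
from collections import defaultdict

swipe_history = [  # (profile attributes, swiped right?)
    ({"neighborhood": "A", "school": "X"}, True),
    ({"neighborhood": "A", "school": "Y"}, True),
    ({"neighborhood": "B", "school": "X"}, False),
    ({"neighborhood": "B", "school": "Y"}, False),
]

# Learn a per-attribute "appeal" rate from the user's own swiping behavior.
likes, seen = defaultdict(int), defaultdict(int)
for profile, liked in swipe_history:
    for value in profile.values():
        seen[value] += 1
        likes[value] += int(liked)

def score(profile: dict) -> float:
    """Average historical like-rate across the candidate's attribute values."""
    rates = [likes[v] / seen[v] for v in profile.values() if seen[v]]
    return sum(rates) / len(rates) if rates else 0.0

# Two otherwise identical candidates end up ranked by neighborhood alone.
candidates = [{"neighborhood": "A", "school": "X"},
              {"neighborhood": "B", "school": "X"}]
for candidate in sorted(candidates, key=score, reverse=True):
    print(candidate, round(score(candidate), 2))
```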

There is a growing sense of responsibility in the tech community to preempt the pernicious application of algorithms. However, since coders themselves are not free of biases, and since even tech companies may fall prey to hidden stereotypes in their recruiting processes, there is a long way to go. While many in Silicon Valley, when asked, say they would subscribe to a constitutional prerogative, meaning that algorithms should by no means discriminate against humans based on their gender, age, ethnicity or sexual orientation, rights all protected by the constitutions of democratic states, a lot of un-learning needs to take place in the real world and in the realm of coding in order to approximate this ideal.

Solving the citizen-consumer problem

The solution to the citizen-consumer problem, as I would like to call it, is to disentangle the two realms from one another, in order to give the constitutionally enshrined values the upper hand and thereby end biases as an unwanted result of today’s applications of AI. In effect, this would mean designing algorithms to “unlearn,” in a manner similar to how social justice movements such as Black Lives Matter and #MeToo demand this of real human beings. The metric by which algorithms operate today is to narrow down or, in the terminology of their creators, optimize and improve their results, actively or implicitly giving way to and perpetuating biases and prejudices. Algorithms would have to learn how to re-implement diversity and reintroduce factors ruled out or relinquished by force of habit in the user’s search behavior, whether purposefully or unwittingly.
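
What such “unlearning” could look like in practice is an open design question; the sketch below is one hedged possibility, a simple re-ranking step that caps how much of a result list any single category may occupy, thereby reintroducing items the pure engagement metric would have filtered out. The categories, scores and cap are illustrative assumptions, not a description of any existing platform, and they anticipate the media example discussed next.

```python
# A minimal sketch of the "reintroduce diversity" idea, under the assumption
# that items carry a category label and an engagement score. Instead of
# showing only what the engagement metric ranks highest, the re-ranker caps
# how much of the visible list any single category may occupy.
from typing import Dict, List, Tuple

def rerank(items: List[Tuple[str, float]], top_k: int, max_share: float) -> List[Tuple[str, float]]:
    """Greedily pick high-scoring items, but never let one category exceed
    max_share of the slots filled so far."""
    ranked = sorted(items, key=lambda item: item[1], reverse=True)
    chosen: List[Tuple[str, float]] = []
    counts: Dict[str, int] = {}
    for category, score in ranked:
        if len(chosen) == top_k:
            break
        share_if_added = (counts.get(category, 0) + 1) / (len(chosen) + 1)
        if not chosen or share_if_added <= max_share:
            chosen.append((category, score))
            counts[category] = counts.get(category, 0) + 1
    return chosen

# A feed the engagement metric alone would fill entirely with the familiar view:
feed = [("familiar_view", 0.90), ("familiar_view", 0.88), ("familiar_view", 0.85),
        ("familiar_view", 0.84), ("opposing_view", 0.70), ("neutral_report", 0.65)]
print(rerank(feed, top_k=3, max_share=0.5))
# -> [('familiar_view', 0.9), ('opposing_view', 0.7), ('neutral_report', 0.65)]
```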

A field where there is already some experience in this matter is the realm of media and journalism. The consumer metric, which displays articles that only favor, underline and perpetuate one’s world view, has led to “opinion silos,” “fake news” assertions and the claim to “alternative facts,” magnifying the polarization within democratic societies. In the case of Donald Trump, some would argue, fake news led the United States to the brink of collapse. Even after Joe Biden was elected as Trump’s successor, the two camps in the US Congress continue to fail to find common ground. This proved fatal in a global pandemic, where adhering to scientific facts saves lives. Conflict in healthy democracies arises over differences of opinion regarding policies that are implemented on the basis of undeniable facts. If facts are rejected on non-rational, religious or ideological grounds, a society is not fit to live with such an elevated technology as AI. In such a case, the technology is indeed much smarter than those who use it, and is completely vulnerable to abuse at the hands of the chosen few who understand the metric of the consumerist internet and use it to their advantage. The Cambridge Analytica scandal was exactly that: the masterminds behind it understood that they could bend the will of the electorate according to its prejudices, based on the currently acceptable uses of AI for political advertising. There is a correlation between these digital developments and the rise in hate crimes in the United States. Technology, once more, was used for bad, not for good.

To salvage the good potential of AI, it is now paramount to make a sharp about-face to give the citizen absolute priority over the consumer. Human rights have civic and social components; consumerist considerations have no place among them. Platform companies such as Google, Facebook and Twitter have just begun to understand their responsibility in steering this process. Understanding intellectually is one thing; starting to change business models, and thereby, the discourse, is another. The right algorithms matter. The necessary work has just begun.


This Spotlight is published as part of the German-Israeli Tech Policy Dialog Platform, a collaboration between the Israel Public Policy Institute (IPPI) and the Heinrich Böll Foundation.

The opinions expressed in this text are solely those of the author(s) and do not necessarily reflect the views of the Israel Public Policy Institute (IPPI) and/or the Heinrich Böll Foundation.
