What is the role of trust in Germany's AI Strategy?


In everyday use, we describe trust as a feeling or an emotion. For some, the only way we can reasonably speak about trust is in the context of interpersonal relationships. Others use trust in broader contexts as well, as in trust towards organizations, corporations, specific processes, and even more abstract concepts. In this sense, we might utter the following sentence: ‘I don’t trust AI technology.’ Beyond this everyday use, trust has become a topic for those working on AI, in particular among policymakers and those interested in shaping policy. Given this development, it is crucial to trace the various uses of the term and their implications for policy and implementation. Most commonly, the interest in trust and AI builds on the assumption that a lack of trust is an obstacle to reaping the full social and economic benefits of AI. Hence, there is a focus on increasing trust through various means.

In the academic literature, trust and AI are investigated from a variety of perspectives. Trust is often associated with specific attributes of machine learning systems. For example, some scholars focus on defining key principles such as fairness, explainability, auditability, and safety. This line of investigation is closely associated with questions around the regulation of AI. The work of others focuses on fostering trust by carefully designing the way humans interact with AI applications, for example, in the context of service delivery. Yet others have made suggestions on how to increase consumer trust and the steps that can be taken by service providers and suppliers, such as declarations of conformity with specific guidelines and principles.

There are also broader debates regarding the relevance of trust to AI. Some of these are more theoretical, such as the question of whether the concept of trust can be invoked in the context of AI at all and, if so, what kind of trust this would imply. Others pertain more directly to policy, helping to conceptualize the terms of warranted and unwarranted contractual trust in the context of AI. It is clear from this very brief overview that the topic of trust and AI opens up a number of avenues of investigation. There is a need for further conceptual as well as practical reflection. Some of the questions raised go to the very core of what it means to be human.

With this background in mind, the present paper is interested in tracing the relevance of trust in AI in specific policy debates. As the brief summary of the academic literature suggests, there is a plurality of perspectives on what trust is and what it means. The best response in the face of this plurality is to contextualize each debate and carefully trace the changing meaning of trust within it. This paper focuses on the German debate on AI and, in particular, the national AI strategy and subsequent, related documents. What is noticeable from the outset is that trust seems to function as a bridge between innovation and realizing the benefits of AI on the one hand and regulation and standards on the other. Using the German AI debate as a case study, this paper aims to shed further light on how the seemingly elusive concept of ‘trust in AI’ is (a) used as a rhetorical device to navigate the tension between innovation and regulation and (b) translated into practical suggestions. Before taking a closer look at the German debate, it is worth putting the policy debate on AI and trust into an international context.

International Principles: The Paris Call and the EU’s Ethics Guidelines

Looking at some recent examples, we find that trust has become a prominent lens for framing international debates on digital technology. Two cases, the Paris Call and the EU’s Ethics Guidelines for Trustworthy Artificial Intelligence, stand out.

The Paris Call for Trust and Security in Cyberspace calls on “a wide variety of actors, in their respective roles, to improve trust, security and stability in cyberspace.” Introducing the Paris Call in his speech at the 2018 Internet Governance Forum, French President Macron summarized the idea as a need for “consolidating trust in the Internet,” which included “trust in the protection of privacy, trust in the legality and quality of content, and trust in the network itself.” The call garnered immediate support from hundreds of signatories, among them 51 states, including Germany, and leading tech companies. After an initial absence, the USA joined the Paris Call in November 2021; China and Russia remain absent. The Paris Call is largely symbolic and not legally binding on signatories. Trust is not directly mentioned in its nine principles, but it can be inferred as an intended outcome. A closer look further reveals that the principles touch upon trust between a variety of actors. Principle Nine, on international norms, for example, fosters trust-building between states. Principle Eight addresses non-state actors, including the business sector. Other principles, including measures such as protecting infrastructure and strengthening the security of digital products throughout their lifecycles, have implications for decisions by companies and the consumers of digital products and services.

Trust makes a more prominent appearance in the EU’s Ethics Guidelines for Trustworthy Artificial Intelligence, published in April 2019. Prepared by the High-Level Expert Group (HLEG) set up by the European Commission in 2018, the guidelines “aim to promote trustworthy AI.” The HLEG coined the term “trustworthy AI,” and the guidelines specify its three components, stipulating that trustworthy AI must be lawful, ethical, and robust. This is further broken down into seven specific requirements: (1) human agency and oversight; (2) technical robustness and safety; (3) privacy and data governance; (4) transparency; (5) diversity, non-discrimination and fairness; (6) environmental and societal well-being; and (7) accountability. The guidelines were followed by the Assessment List for Trustworthy Artificial Intelligence (ALTAI), designed as a self-assessment tool for developers and deployers of AI applications. Following the EU guidelines, the term “trustworthy AI” has found its way into principles proposed by other international organizations, such as the OECD Principles on AI of May 2019. The AI principles adopted by the G20 in June 2019 also center on trustworthy AI. The concept has likewise been taken up by the business sector, for example by IBM and Deloitte.

The German AI Strategy and its Follow-Up

Germany published its Artificial Intelligence Strategy in November 2018. The strategy aims to “safeguard Germany’s outstanding position as a research center, to build up the competitiveness of German industry, and to promote the many ways to use AI in all parts of society in order to achieve tangible progress in society in the interest of its citizens.” This is also expressed in the idea of establishing “AI made in Germany” as a globally recognized brand.

With its 12 fields of action, the strategy covers a broad range of topics, including strengthening research, supporting small and medium enterprises, safeguarding work and the labor market, using AI in public administration, making data available and facilitating its use, developing a regulatory framework, setting standards, engaging in national and international networking, and fostering societal dialogue. In terms of breadth and areas of focus, it shares many characteristics with other national AI strategies.

Trust is relevant for the 2018 strategy, but it is not a prominent frame of reference: it does not appear in any headings, and there is no sustained treatment of the topic. The particular instances in which trust can be located within the strategy are, however, instructive. Trust is mentioned in the fields of action concerning research, labor, data, regulation, and national and international networking. The ways in which trust appears in these instances can best be summarized by looking at the “who?,” “why?,” and “how?”. In answer to the question of whose trust is important, the strategy puts emphasis on the general public, users of AI applications, workers, companies, and civil society. With regard to the “why,” the strategy argues that trust among these groups is important in order to successfully use the technology, encourage the private sector to share data, create acceptance within the workforce, and, ultimately, secure a competitive advantage for Germany and Europe. As for the “how,” this is to be achieved by appropriate standards and regulation (including data protection and privacy), researching novel ways of pseudonymizing and anonymizing data, establishing trustworthy data and analysis infrastructure, ensuring the active participation of citizens in general and of workers in particular through works councils, making AI explainable, accountable, transparent, and verifiable, and running “a comprehensive, nation-wide information and policy campaign.”

The strategy was followed by an interim report as well as recommendations from the Data Ethics Commission and the Commission on Artificial Intelligence. The interim report highlights concrete proposals and initiatives in relation to the 12 fields of action. An update to the strategy was published at the end of 2020. Trust takes a more prominent role in this update and is now explicitly aligned with the “AI made in Germany” brand. The word “trust” is mentioned twice as often as in the original strategy, including in a sub-heading, “underlying conditions for safe and trustworthy AI applications,” in the section on regulation. The update also fleshes out the importance of trust and how to achieve it in greater detail. Trust is mentioned in the context of reducing user and investor uncertainty, increasing investment security, increasing legal certainty for companies, promoting innovation and competition, bridging the private sector and science, protecting the rights of citizens, and addressing the reservations and concerns of the general population. Regarding regulation, alignment with the EU’s trustworthy AI principles is explicitly mentioned. The annex contains a substantive list of “next steps in the implementation,” many of which have the potential to increase trust. In short, the update adds a more specific discussion of trust and defines more clearly its “who?,” “why?,” and “how?”. One of the most prominent areas in which trust is shaped, according to the updated strategy, is regulation and standardization.

Following from this, it is instructive to look at the recent suggestions on standardization. The German Standardization Roadmap on AI was launched in November 2020, and trust plays a prominent role in framing it. On a practical level, the roadmap identifies 70 standardization needs. It acknowledges that a number of existing standards apply and suggests creating “an overarching ‘umbrella standard’ that bundles existing standards and test procedures for IT systems and supplements them with AI aspects.” Most interesting in the context of the discussion of trust and AI is the suggestion of a certification program, which at its core has “reproducible and standardized test procedures with which properties of AI systems such as reliability, robustness, performance and functional safety can be tested and statements about trustworthiness made.” The roadmap calls this certification program “Trusted AI.” The hope is to substantiate declarations of trust by developing test methods and standards to evaluate the properties of AI systems but also “the measures taken by organizations providing AI systems.” In other words, this would be both a product test and a management test. According to the roadmap, the hope is to be the first in the world to develop such a certification program and to see it gain international recognition.

Certified Trust?

As discussed at the beginning of this paper, a number of discourses assume that trust will play an important role in the future development of AI. As we have seen, trust appears in the German AI strategy and its related documents, which offer specific suggestions for increasing trust related to infrastructure, data, and dialogue between stakeholders.

One of the most interesting proposals from the German debate is the “Trusted AI” certification program. Such a program works well to support the “AI made in Germany” brand. It also addresses some of the concerns raised in the German AI strategy regarding trust among consumers, investors, and businesses. The planned program clearly steers the conversation on trust towards practical measures to evaluate AI systems and those managing and implementing them. In this sense, a certificate could be seen as a measure of trustworthiness that helps us to quantify what we mean when we talk about trust in the context of AI. The overall impression is that the debate is occupied with balancing the tension between innovation and regulation. Regulation and the proposed certification program, specifically, can be read as an attempt to strengthen the value of AI products and services while respecting values and ethical principles.

In short, the German approach operationalizes trust in terms of regulation and standards, and in particular through the suggestion of a certification program. This, however, makes me wonder whether we need to bring trust into this conversation at all. Concepts such as responsibility, accountability, and safety seem to function in a similar way without the allusions to emotions and interpersonal relations that the concept of trust tends to carry. After all, the CE mark on a toaster is relevant to my decisions as a consumer, but there is no need to talk about trusting my toaster. The impression from this analysis of the German approach is that trust acts more as a rhetorical and framing device. As the discussion continues and as we move towards implementation of the standardization roadmap, we might also see a rhetorical shift from “trust in AI” towards “certified AI.”


This text is published as part of the German-Israeli Tech Policy Dialog Platform, a collaboration between the Israel Public Policy Institute (IPPI) and the Heinrich-Böll-Stiftung. 

The opinions expressed in this text are solely those of the author(s) and do not necessarily reflect the views of the Israel Public Policy Institute (IPPI) and/or the Heinrich-Böll-Stiftung.
