What are Recommender Systems?

Authors: Eduardo Magrani and Paula Guedes Fernandes da Silva

In the current data-driven economy, Recommender Systems (RS), built on artificial intelligence techniques, are evolving fast and becoming a widespread tool. These systems perform information filtering to optimize users' online experience, recommending personalized content, products or services based on the processing of personal data about their interests, needs, online behavior and characteristics.[i]
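
To make the filtering mechanism concrete, the sketch below implements a toy version of collaborative filtering, one common RS technique: users with similar past ratings are assumed to share future preferences. The rating data is entirely made up for illustration; real systems operate on large-scale behavioral data such as clicks, views and purchases.

```python
# A minimal sketch of user-based collaborative filtering.
import numpy as np

# Rows: users, columns: items; entries are ratings (0 = not yet seen).
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two users' rating vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def recommend(user, k=1):
    """Score unseen items by the ratings of similar users, weighted by similarity."""
    sims = np.array([cosine_sim(ratings[user], ratings[u])
                     for u in range(len(ratings))])
    sims[user] = 0.0                      # exclude the user themselves
    scores = sims @ ratings               # similarity-weighted item scores
    scores[ratings[user] > 0] = -np.inf   # do not re-recommend seen items
    return np.argsort(scores)[::-1][:k]   # top-k unseen items

print(recommend(user=0))  # suggests the item favored by the most similar user
```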

Uses, benefits and risks related to recommendation systems

Although most people are unfamiliar with the term “recommender systems,” these systems already influence our daily lives: through e-commerce push advertising, suggestions for new “friends” and content on social media, suggestions for songs, series and movies on streaming platforms, proposed matches on dating apps and even sensitive wellbeing advice in health care apps.

While RS certainly play an important role in the era of information overload, helping individuals filter useful and relevant content, save time and organize information online, they also have another side. Because of their pivotal impact on users seeking information online, these systems give tremendous power to those who control them, especially given the potential for abuse of the enormous economic and political gatekeeping functions performed by recommender algorithms.

Despite the benefits and convenience these systems introduce, they also pose ethical issues and significant risks to human rights: they may recommend biased items, interfere with individual self-determination and autonomy, rely on unlawful data processing that harms privacy and data protection, and may even be developed in breach of transparency and explicability principles.

Attempts to regulate recommender systems: A view from the EU

Due to the rapid implementation of Recommender Systems and other AI tools in different sectors, by both public and private entities, the European Union (EU) has begun to analyze possible means of controlling and limiting the technology in order to make it function for the benefit of society and not against it, and, especially, to protect vulnerable groups. Initially, regulation took the form of ethical principles, guidelines and opinions on the development and use of AI, such as the 2019 Ethics Guidelines for Trustworthy AI by the Independent High-Level Expert Group on Artificial Intelligence, and the European Commission’s White Paper on AI of February 2020.

With these basic principles and guidelines for AI established, the European Union is now moving to implement binding legal rules specifically applicable to this technology, in addition to the already applicable data protection legislation, chief among it the General Data Protection Regulation (GDPR). We have thus recently seen concrete examples of legislation targeting Artificial Intelligence, the main ones being the Digital Services Act (DSA), which has now been approved by the European Parliament and the Member States, and, more specifically, the Artificial Intelligence Act (AIA) proposal.

Digital Services Act

While the DSA is a broad act covering many legal issues generated by the digital revolution, especially those raised by online platforms, recital 62 and Articles 2(o) and 29 of the Regulation directly address recommender systems provided by online platforms. They focus on the information (clear, comprehensive and accessible) and the options that these platforms must provide to users in order to enable them to influence the recommendations they receive.

As recommender systems have a significant impact on how people behave, interact with one another and find information online, the DSA’s relevant articles build on the GDPR’s rules on user control over personal data, with the aim of empowering users through increased information and choice. For example, the Regulation requires very large online platforms that use RS to conduct risk assessments and to explain in their terms and conditions the main parameters of these systems, as well as the options available to users to modify or influence those parameters, including the option not to be subjected to profiling.
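
As a rough illustration of what such user-facing options could look like in practice, the following sketch exposes a profiling toggle and a choice of ranking signal. The setting names and the non-profiling fallback are hypothetical illustrations, not an interface mandated by the DSA.

```python
# A hedged sketch of DSA-style user controls over a recommender feed.
from dataclasses import dataclass

@dataclass
class RecommenderSettings:
    use_profiling: bool = True        # Art. 29: option not to be profiled
    ranking_signal: str = "personal"  # e.g. "personal", "chronological", "popular"

def rank_feed(items, settings: RecommenderSettings):
    """Order a feed according to the user's chosen parameters."""
    if not settings.use_profiling or settings.ranking_signal == "chronological":
        # Non-profiling fallback: no personal data enters the ranking.
        return sorted(items, key=lambda it: it["published"], reverse=True)
    if settings.ranking_signal == "popular":
        return sorted(items, key=lambda it: it["views"], reverse=True)
    return sorted(items, key=lambda it: it["personal_score"], reverse=True)

feed = [
    {"id": 1, "published": 100, "views": 900, "personal_score": 0.2},
    {"id": 2, "published": 200, "views": 100, "personal_score": 0.9},
]
print(rank_feed(feed, RecommenderSettings(use_profiling=False)))  # newest first
```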

However, there is still room for improving the DSA. For example, its recommender-system provisions apply only to “very large online platforms,” and the Act lacks concrete solutions and enforceable obligations, which risks creating a false sense of transparency and control. Moreover, Article 29 is vague about the specific nature of the user options, requiring only that they be in accordance with “users’ preferences,” which may reinforce polarization and a lack of diversity online.

Artificial Intelligence Act Proposal

The EU already has important regulations applicable to Artificial Intelligence, which provide some level of protection. However, the existing legislation is insufficient to address all the challenges that the technology may create. In April 2021, the European Commission therefore proposed the first legal regulation specifically targeting AI, which aims to provide AI developers, deployers and users with clear requirements and obligations regarding the technology in order to both encourage innovation and protect potentially threatened fundamental rights and freedoms.

The proposal takes a risk-based approach, addressing risks specifically created by AI applications and categorizing them as unacceptable, high, limited or minimal, according to whether and how they affect personal safety, health or fundamental rights. Although most AI systems existing today are considered of limited or minimal risk and useful to society, based on recital 14 it may be necessary to prohibit certain AI practices, impose requirements on high-risk AI systems and obligations on their operators, and/or institute transparency obligations for certain AI systems, depending on the intensity and scope of the risks a given AI system may generate.
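
The sketch below renders this four-tier logic as a simple dispatch. The tiers themselves come from the proposal, but the criteria in classify() are loose illustrative assumptions, not the actual legal test.

```python
# A simplified sketch of the AIA proposal's risk-based approach.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice (Article 5)"
    HIGH = "mandatory requirements before market entry (Article 6, Annex III)"
    LIMITED = "transparency obligations only"
    MINIMAL = "no new obligations; existing law still applies"

def classify(manipulative: bool, annex_iii_use_case: bool,
             interacts_with_humans: bool) -> RiskTier:
    """Illustrative dispatch over the proposal's criteria (much simplified)."""
    if manipulative:                # e.g. subliminal techniques, Art. 5(1)(a)
        return RiskTier.UNACCEPTABLE
    if annex_iii_use_case:          # listed high-risk application area
        return RiskTier.HIGH
    if interacts_with_humans:       # e.g. duty to disclose the use of AI
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# A movie recommender: no Annex III use case, no manipulation, talks to humans.
print(classify(False, False, True))  # RiskTier.LIMITED
```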

In contrast to the DSA, the AI Act proposal (henceforth: AIA proposal) does not specifically address recommendation systems, but it will inevitably apply to them, as they are based on Artificial Intelligence and the “generation of recommendations” is covered by the proposal’s definition of AI in Article 3(1) as one of AI’s functionalities.[ii] Consequently, recommendation systems are likely to receive differential treatment under the AI Act according to the level of risk (based on the four levels specified above) that they generate in a specific case.

That said, RS that pose no or only minimal risk may still be freely developed and used, subject to no limits or only light transparency obligations, for example a requirement to flag the use of an AI system when it interacts with humans.

It is important to note that even if an RS based on AI is classified as minimal- or no-risk, and is thus free of obligations or subject to only a few requirements under the AIA proposal, other EU regulations, such as the General Data Protection Regulation, Directive 2002/58/EC on Privacy and Electronic Communications and the DSA, continue to apply.[iii] On the other hand, considering the potentially manipulative uses of some RS, the AIA proposal allows for prohibiting them when developed with “subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behavior”[iv] or when they “exploit[s] any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behavior of a person pertaining to that group”[v] in a way that causes or is likely to cause physical or psychological harm. Thus, it is plausible that recommendation systems used exclusively for children would be an example of a prohibited AI application.

Criteria for high-risk AI are defined in Article 6 and complemented by a non-exhaustive list of high-risk applications in Annex III. In this context, there is a significant chance that recommendation systems may be classified as posing a high risk to the health, safety or fundamental rights of individuals under the criteria of the AIA proposal, as they may affect the rights to privacy, data protection, non-discrimination or autonomy, may manipulate individuals’ behavior and may even negatively influence democratic processes. In such cases, the high-risk recommender systems would be subject to a series of mandatory requirements for trustworthy AI before being released to the market or put into service.

Such obligations might include appropriate data governance (Article 10), elaboration of adequate risk management and mitigation systems (Article 9), technical documentation (Article 11), appropriate human oversight (Article 14) and provision of clear and adequate information to users (Article 13). RS would also be subject to enforcement once such systems are in use. These ex-ante requirements, pertaining to transparency and risk assessment, would create an incentive for RS providers to pursue compliance-by-design in the case of high-risk recommender systems. In addition to enforcing requirements for high-risk systems, the AIA proposal imposes predictable, proportionate and clear obligations on RS providers and users to ensure compliance with existing legislation protecting fundamental rights throughout the entire RS lifecycle.
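
As an illustration, the following sketch collects these obligations into a hypothetical pre-market checklist. The mapping to articles follows the proposal, but the checklist structure and the ready_for_market() gate are our own illustrative construct, not something the Act prescribes.

```python
# A hedged sketch of an ex-ante compliance checklist for a high-risk RS.
HIGH_RISK_OBLIGATIONS = {
    "risk management system":  "Article 9",
    "data governance":         "Article 10",
    "technical documentation": "Article 11",
    "information to users":    "Article 13",
    "human oversight":         "Article 14",
}

def ready_for_market(completed: set[str]) -> bool:
    """Compliance-by-design: every obligation must be met before release."""
    missing = set(HIGH_RISK_OBLIGATIONS) - completed
    for item in sorted(missing):
        print(f"Missing: {item} ({HIGH_RISK_OBLIGATIONS[item]})")
    return not missing

print(ready_for_market({"data governance", "human oversight"}))  # False
```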

Although the proposal, the first regulation of its kind (i.e. specifically directed at AI), has notable strengths, such as its potential to serve as an international reference, there are still shortcomings that need to be addressed: the use of vague terms, loopholes created by diverging interpretations of the risk-based classification, and the currently limited recourse options for those affected by AI systems to challenge harmful outcomes. For instance, if an RS has substantial effects on people’s lives, it must not only adhere to standards of transparency concerning the implementation of the system, but also offer avenues for its recommendations and decisions to be challenged. That is, there must be legal and easily accessible options for affected people to question the recommendations given by the system and, if necessary, to demand reversal of the recommended decision, its reconsideration through a different procedure, or even compensation for any damage caused. Technological solutions alone are insufficient to ensure that AI systems are used to the benefit of the many, not the few. Empowering those directly affected by such systems, as the DSA’s accountability framework does to some extent, is an important aspect of this AI context.

Concluding overview

In summary, there is a clear trend in the regulation of AI systems such as recommendation systems: the initial approach of drawing up recommendations and guiding principles is now evolving towards binding legislative acts, as exemplified by a number of EU legal initiatives. While this is a welcome development, some caution is advised, to ensure both that these regulations do not act as barriers to innovation by creating overly rigid obligations, and that they do not take the form of vague and inoperative rules that merely give the false appearance of regulation.

While recommender systems can pose dangers, they can also fulfill a crucial role in democratic society when well developed and properly used, contributing to the realization of fundamental rights and public values. The new legislative initiatives must ensure that these systems work according to and not at odds with these values. The combined force of the AIA proposal and DSA regulation may enhance users’ empowerment and effective choice/control, mitigating potential risks and damages. It is a commendable first step, but we still have a long way to go.


[i] Felfernig, A.; Friedrich, G.; Schmidt-Thieme, L. (2007). “Introduction to the IEEE Intelligent Systems Special Issue: Recommender Systems,” IEEE Intelligent Systems, 22(3), 18-21. https://doi.org/10.1109/MIS.2007.52. p. 18; Kanoje, S.; Girase, S.; Mukhopadhyay, D. (2015). “User Profiling for Recommendation System,” ArXiv, abs/1503.06555. p. 1.

[ii] Article 3 (1) of the Artificial Intelligence Act Proposal: “artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

[iii] Page 13 of the AIA Proposal: “Other manipulative or exploitative practices affecting adults that might be facilitated by AI systems could be covered by the existing data protection, consumer protection and digital service legislation that guarantee that natural persons are properly informed and have free choice not to be subject to profiling or other practices that might affect their behaviour.”

[iv] Article 5 (1) (a) of the Artificial Intelligence Act Proposal.

[v] Article 5 (1) (b) of the Artificial Intelligence Act Proposal.

The opinions expressed in this text are solely that of the author/s and do not necessarily reflect the views of the Israel Public Policy Institute (IPPI) and/or its partners.
