Approaches to Analyse and Evaluate AI-Based Recommendation Systems for Internet Intermediaries

Recommender systems on internet platforms play a crucial role in the everyday lives of many people. Whether we use a search engine, social media, or a video platform, algorithms, artificial intelligence (AI), and statistical calculations determine which content is displayed to users, in which order, and in which context. Although these systems shape our daily lives, how we inform ourselves, and how we communicate with one another, their design is difficult to understand for users as well as for politicians, researchers, and civil society. Whether certain content is systematically disadvantaged or favored, whether recommender systems amplify hate and disinformation, and how user behavior, algorithms, and platform design intertwine are highly relevant questions for democracy. Politicians have reacted by developing new regulatory measures to ensure greater transparency and independent auditing of algorithmic systems.

However, regulators lack the resources and technical skills to investigate how recommendation systems work and how audits, risk assessments, and transparency requirements should be formulated. The same applies to the question of who can and should be responsible for holding internet platforms and their algorithms to account. It is therefore urgently necessary to develop approaches for examining AI recommendation systems, the role they play in the design of platforms, and their impact on public discourse, society, and democracy.

The project "Approaches to Analyse and Evaluate AI-Based Recommendation Systems for Internet Intermediaries" is dedicated to addressing this challenge by looking more closely at AI-based recommendation systems and the related design practices of platforms.

Within the framework of the project, an interdisciplinary team from academia, media regulators, NGOs, and the platforms themselves will develop approaches that address what a comprehensive, meaningful analysis of the platforms and their effects and risks might entail, and how it should be conducted. Both the technical dimension of how AI recommendation systems function on the platforms and the social dimension, with regard to platform design and interaction with users, will be taken into account.
Please find more information on SNV's work on the TikTok Audit here.