AI Fairness and Accountability
Overview
In a world where AI is increasingly present in our everyday lives, it is more important than ever to understand and minimise the potential negative impacts of this technology. Through our research on AI Fairness and Algorithmic Accountability, we seek to understand how AI systems can cause harm and how they must be designed so that companies and individuals alike reap the benefits of AI without violating human rights and dignity. In doing so, we explore how companies can implement the ethical and legal requirements that arise from the development and use of AI.
Collaborations
No collaborations available.
Projects & Talks
Project title: “Explaining and Mitigating Information-Limiting Environments in Personalised News Platforms (EMILE)”
In the digital age, news is increasingly consumed online via personalised news platforms (PNPs), which deliver highly personalised news feeds through algorithmic news recommendation systems (NRS). While these platforms offer numerous advantages, they also carry the risk of information-limiting environments (ILEs) such as filter bubbles and echo chambers. A filter bubble presents an individual user with only a selective subset of the available news, while an echo chamber can be described as the overlapping filter bubbles of several users. Both phenomena have the potential to promote hate speech, misinformation, and polarisation, thereby restricting freedom of discourse.

Although ILEs have been described conceptually and in some cases demonstrated empirically, well-founded empirical evidence on the conditions under which ILEs occur, and to what extent, is still lacking. This is due to inconsistent operationalisations of ILEs and insufficient empirical investigation of many potential ILE antecedents. There is also little empirical evidence on the efficacy of potential countermeasures.

The proposed research project therefore first aims to develop a uniform operationalisation of ILEs that captures both an objective bias in information consumption and the affected user's lack of awareness of it. Building on online experiments, the project will develop an empirically validated explanatory model for the emergence of ILEs. Subsequently, interventions will be designed and empirically evaluated that motivate users to consume news more diversely, both by increasing the diversity of news in their feed and by persuading them to actively consume the additional news. The effectiveness of the interventions will be measured in an empirical comparative study that examines both changes in consumption behaviour and the acceptance of the interventions.

The proposed project builds on several preliminary studies published at relevant conferences in the field of information systems. The planned project duration is three years. The project's theoretical contribution is a better causal understanding of ILEs in PNPs and a standardised, comparable ILE metric. Its practical and societal contribution lies in evidence-based measures to reduce ILEs in PNPs that users adopt voluntarily. This would enable PNP operators and legislators to prevent ILEs without restricting the rights and freedoms of users, and thus to curb the spread of misinformation and hate speech online.
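To make the operationalisation idea above concrete, the following is a minimal, hypothetical sketch in Python of how the "objective bias in information consumption" component might be scored: as the Kullback-Leibler divergence between the topic distribution of the news a user actually consumes and that of the platform's full offering. The topic labels, the add-one smoothing, and the function names (topic_distribution, consumption_bias) are illustrative assumptions for this sketch, not the project's actual ILE metric, which the project itself sets out to develop.

# Hypothetical sketch: quantify "objective bias in information consumption"
# as the divergence between the topic mix of a user's consumed news and the
# topic mix of the platform's full news offering. All names and parameters
# here are illustrative assumptions.
from collections import Counter
from math import log

def topic_distribution(articles, topics):
    # Relative frequency of each topic among the given articles, with
    # add-one smoothing so the KL divergence below stays finite.
    counts = Counter(articles)
    total = len(articles) + len(topics)
    return {t: (counts[t] + 1) / total for t in topics}

def consumption_bias(user_articles, platform_articles):
    # Kullback-Leibler divergence D(user || platform): 0 means the user's
    # topic mix mirrors the full offering; larger values indicate a more
    # selective (potentially bubble-like) consumption pattern.
    topics = sorted(set(platform_articles) | set(user_articles))
    p = topic_distribution(user_articles, topics)
    q = topic_distribution(platform_articles, topics)
    return sum(p[t] * log(p[t] / q[t]) for t in topics)

# Example: a user who reads almost only sports scores higher than one
# whose reading matches the platform-wide topic mix.
platform = ["politics", "sports", "economy", "culture", "politics", "sports"]
narrow_user = ["sports", "sports", "sports", "sports"]
broad_user = ["politics", "sports", "economy", "culture"]
print(consumption_bias(narrow_user, platform))  # ~0.23 (more biased)
print(consumption_bias(broad_user, platform))   # ~0.02 (less biased)

Note that such a score captures only the objective-bias half of the proposed operationalisation; the user's lack of awareness would have to be measured separately, for example through the planned online experiments.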
Project title: “Realization of Ethically Responsible Information Systems for Strategic Decision Making (REVISE)”
When companies or other institutions use data-driven artificial intelligence methods to make or inform strategic decisions, the question arises as to what ethical requirements should be placed on the information systems in question and how these can be realized technically. The aim of this project is to develop and evaluate, in collaboration with Merck, concepts and procedures that support decision makers in anticipating the ethical implications of using such systems and in ensuring their responsible use.