Responsible Artificial Intelligence
The research area “Responsible Artificial Intelligence” brings together our work on the role of AI in business and society and addresses both the positive and the negative impact of AI on people and organizations. We examine the role of algorithms in decision-making and the dimensions that contribute to fair AI. We also address the responsibility of companies and individuals in the development and use of AI-based systems, as well as design options for AI governance and AI ecosystems.
Algorithms in decision-making
AI, once the subject of human imagination and the stuff of science fiction novels, is increasingly becoming a reality across industries. Advances in computing power and the availability of vast amounts of data have led companies to invest heavily in AI solutions. Unlike in dystopian science fiction, however, the current use of AI is not about putting machines in control of the world. Rather, the high accuracy of AI and its underlying algorithms has the potential to improve human decision-making and to deliver significant economic benefits.
The idea of improving human decision-making by automating parts of the decision process is not new. Decades of research show that algorithms can make more accurate predictions than humans in many decision-making situations. However, the substantial recent investments in AI seem to be at odds with current adoption rates. Several studies have shown that people distrust algorithmic results in a variety of decision areas, a phenomenon that has been termed “algorithm aversion.” Experts and laypeople alike are often reluctant to rely on algorithms and prefer a human prediction to an algorithmic one.
This general distrust is costly and jeopardizes optimal decision outcomes. For this reason, this research area investigates the factors that influence how algorithms are evaluated. It also addresses how fair algorithms can actually be and how successful cooperation between humans and machines can be ensured in the long run.
AI fairness and accountability
In a world where AI is increasingly present in our everyday lives, it is more important than ever to understand and minimize the potential negative impacts of this technology. Through our research on AI Fairness and Algorithmic Accountability, we seek to understand how AI systems can lead to harmful effects and how they need to be designed so that companies and individuals alike reap the benefits of AI without violating human rights and dignity. In doing so, we explore how companies can implement ethical and legal requirements that arise from the development and use of AI.
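To make one such fairness dimension more concrete, the following short Python sketch computes the demographic parity difference, i.e. the gap in positive-prediction rates between two groups – one common, but by no means the only, way to quantify fairness. The predictions and group labels are purely illustrative assumptions, not data from our research.

import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between two groups (0 = parity)."""
    rate_group_0 = y_pred[group == 0].mean()
    rate_group_1 = y_pred[group == 1].mean()
    return abs(rate_group_0 - rate_group_1)

# Hypothetical binary predictions for ten applicants from two groups
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print(demographic_parity_difference(y_pred, group))  # 0.6 - 0.2 = 0.4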
AI governance and ecosystems
AI regularly makes headlines for discriminating against and disadvantaging marginalized groups. In many of these cases, AI simply mirrors our society and reflects biases that already exist in our economy and society. For AI to make a meaningful contribution to upholding human rights and to achieving the Sustainable Development Goals (SDGs), companies need AI governance. Its task is to ensure that the development and use of AI in a company follows the company’s goals, complies with regulations, and does not violate ethical principles. Organizational AI governance is a young field of research and is still in its infancy in many companies. Yet it is critical, because it sets the framework for the future impact of AI on our economy and society.
In practice, the activities required to develop and implement AI systems in business areas – from data collection and preparation to model training, continuous monitoring, and maintenance – require new and versatile digital skills and resources that are usually difficult to find within a single company or with a single vendor. Companies therefore need to make major organizational changes and/or collaborate with existing and new partners to gain access to the required capabilities and resources.
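As a minimal sketch of the lifecycle stages named above, the following Python example strings together data preparation, model training, and a simple monitoring check with scikit-learn; the dataset, model choice, and alert threshold are illustrative assumptions rather than a recommended setup.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Data collection and preparation (here: a bundled example dataset)
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model training
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Continuous monitoring and maintenance, sketched as a one-off check
accuracy = accuracy_score(y_test, model.predict(X_test))
if accuracy < 0.90:  # hypothetical alert threshold
    print(f"Maintenance needed: accuracy dropped to {accuracy:.2f}")
else:
    print(f"Model within expected performance range: {accuracy:.2f}")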
With our research on AI ecosystems, we seek to understand, from different perspectives, which structures, roles, capabilities, and resources – beyond the necessary technical components – are essential for realizing an AI system, as well as which business models, challenges, and potentials are associated with this form of value creation.