Is this system biased? – How users react to gender bias in an explainable AI system.
Because of its potential to make decision-making more efficient and reliable, artificial intelligence (AI) is increasingly supporting consequential human decisions. However, AI systems have been found to replicate and reinforce social stereotypes, producing biased outcomes against minorities, women and people of colour. In particular, AI systems have repeatedly been shown to reinforce gender bias by favouring men over women.
One approach to mitigating the negative impact of gender bias is to increase the transparency of AI systems by providing explanations using novel Explainable Artificial Intelligence (XAI) methods. Explanations can increase user engagement and help users identify and reject biased decision recommendations from AI systems. However, XAI research faces several challenges from a user perspective. First, existing XAI methods have not been sufficiently evaluated in studies with actual users, so it remains unclear how users understand and use the explanations these methods produce. Second, it is unclear how explanations of a biased AI system affect user perceptions such as trust.
With our research, we want to investigate how users evaluate XAI explanations and whether these explanations can help them detect biases in AI systems. In particular, we are interested in the trade-off between the positive effects of the increased transparency that XAI explanations provide and the negative effects on user perceptions when those explanations reveal bias.
The event will be held in English.
Guest: Since June 2018, Miguel Angel Meza Martinez has been conducting doctoral research at the Institute of Information Systems and Marketing (IISM) at the Karlsruhe Institute of Technology. His research focuses on Interactive Machine Learning and Explainable Artificial Intelligence (XAI). The overall goal of his research is to understand how AI systems can be designed so that users can interact with them to improve them. For this human–AI collaboration to succeed, AI systems need to be transparent about how they make decisions, which is why it is important to provide explanations to users. Designing more transparent AI systems strengthens users' trust in them and influences their acceptance in the long run.
Moderation: Sabine Faller is a research assistant in the department of museum communication at ZKM | Karlsruhe. Her focus is on the conception and implementation of workshops, projects and educational programs in the fields of media art, digital education and online learning – currently for the research project »Digitalization in Dialog – digilog@bw«.
Organization / Institution: ZKM | Center for Art and Media Karlsruhe