Publication Details
Michael Chromik, Martin Schuessler
A Taxonomy for Human Subject Evaluation of Black-Box Explanations in XAI. In Workshop on Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies at IUI'20, Cagliari, Italy, March 17-20, 2020. ACM, New York, NY, USA.
The interdisciplinary field of explainable artificial intelligence (XAI) aims to foster human understanding of black-box machine learning models through explanation methods. However, there is no consensus among the involved disciplines on how to evaluate their effectiveness, especially concerning the involvement of human subjects. For our community, such involvement is a prerequisite for rigorous evaluation. To better understand how researchers across the disciplines approach human subject XAI evaluation, we propose a taxonomy that we iteratively refine through a systematic literature review. Approaching these studies from an HCI perspective, we analyze which study designs scholars chose for different explanation goals. Based on our preliminary analysis, we present a taxonomy that provides guidance for researchers and practitioners on the design and execution of XAI evaluations. With this position paper, we put our survey approach and preliminary results up for discussion with our fellow researchers.