Publication Details
Malin Eiband, Daniel Buschek, Alexander Kremer, Heinrich Hussmann
The Impact of Placebic Explanations on Trust in Intelligent Systems. In CHI '19 EA: Extended Abstracts of the 37th SIGCHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, May 4–9, 2019. ACM, New York, NY, USA.
Work in social psychology on interpersonal interaction [5] has demonstrated that people are more likely to comply with a request if they are presented with a justification, even if this justification conveys no information. In light of the many calls for explaining the reasoning of interactive intelligent systems to users, we investigate whether this effect holds true for human-computer interaction. Using a prototype of a nutrition recommender, we conducted a lab study (N=30) with three groups (no explanation, placebic explanation, and real explanation). Our results indicate that placebic explanations for algorithmic decision-making may indeed invoke perceived levels of trust similar to real explanations. We discuss how placebic explanations could be considered in future work.