ZEW Discussion Paper No. 22-071 // 2022

Algorithmic Advice as a Credence Good

Decision-makers in a wide range of settings increasingly rely on algorithmic tools to support their decisions. Much of the public debate about algorithms, especially the associated regulation of new technologies, rests on the assumption that humans can assess the quality of algorithms. We test this assumption in an online experiment with 1,263 participants. Subjects perform an estimation task and receive algorithmic advice. Our first finding is that, in our setting, humans cannot verify the algorithm's quality. We therefore argue that algorithms exhibit traits of a credence good: decision-makers cannot verify the quality of such goods even after "consuming" them. Based on this finding, we test two interventions intended to improve individuals' ability to make good decisions in algorithmically supported situations. In the first intervention, we explain how the algorithm works. We find that while the explanation helps participants recognize bias in the algorithm, it markedly decreases their decision-making performance. In the second intervention, we reveal the task's correct answer after every round and find that this improves decision-making performance. Our findings have implications for policy initiatives and managerial practice.

Biermann, Jan, John Horton and Johannes Walter (2022), Algorithmic Advice as a Credence Good, ZEW Discussion Paper No. 22-071, Mannheim.

Authors Jan Biermann // John Horton // Johannes Walter