Towards Accountability in Machine Learning Applications: A System-Testing Approach

ZEW Discussion Paper No. 22-001 // 2022

A rapidly expanding universe of technology-focused startups is trying to change and improve the way real estate markets operate. The undisputed predictive power of machine learning (ML) models often plays a crucial role in the ‘disruption’ of traditional processes. However, an accountability gap prevails: How do the models arrive at their predictions? Do they do what we hope they do, or are corners cut? Training ML models is, at heart, a software development process. We suggest following a dedicated software testing framework and verifying that the ML model performs as intended. As an illustration, we augment two ML image classifiers with a system testing procedure based on local interpretable model-agnostic explanation (LIME) techniques. Analyzing the classifications sheds light on some of the factors that determine the behavior of these systems.
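
To make the LIME-based step concrete, the following is a minimal sketch, not the paper's own code, of how an explanation for a single image classification can be generated with the open-source Python `lime` package and scikit-image. The function `classify_batch` and the random placeholder image are hypothetical stand-ins for a trained classifier and real input data.

    # Minimal LIME sketch (illustrative only, not the authors' implementation).
    import numpy as np
    from lime import lime_image
    from skimage.segmentation import mark_boundaries

    def classify_batch(images):
        # Hypothetical stand-in for a trained classifier: LIME only needs a
        # function mapping a batch of images (N, H, W, 3) to class
        # probabilities (N, n_classes). Here: two classes from mean brightness.
        brightness = images.mean(axis=(1, 2, 3))
        return np.stack([1.0 - brightness, brightness], axis=1)

    # Any RGB image as a float array in [0, 1] works; random noise as a placeholder.
    rng = np.random.default_rng(0)
    image = rng.random((224, 224, 3))

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image, classify_batch, top_labels=2, hide_color=0, num_samples=1000
    )

    # Highlight the superpixels that pushed the prediction towards the top label.
    img, mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
    )
    overlay = mark_boundaries(img, mask)  # inspect to audit what the classifier relies on

Inspecting such overlays across many test images is what allows the system-testing procedure to check whether the classifier bases its predictions on the intended image regions.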

Wan, Wayne Xinwei and Thies Lindenthal (2022), Towards Accountability in Machine Learning Applications: A System-Testing Approach, ZEW Discussion Paper No. 22-001, Mannheim.

Authors: Wayne Xinwei Wan // Thies Lindenthal