
Automated machine learning for studying the trade-off between predictive accuracy and interpretability

Freitas, Alex A. (2019) Automated machine learning for studying the trade-off between predictive accuracy and interpretability. In: Machine Learning and Knowledge Extraction: International Cross-Domain Conference, CD-MAKE 2019, Canterbury, UK, August 26–29, 2019, Proceedings. Lecture Notes in Computer Science 11713. Springer, pp. 48–66. ISBN 978-3-030-29725-1. E-ISBN 978-3-030-29726-8. (doi:10.1007/978-3-030-29726-8_4) (KAR id:77014)

PDF Author's Accepted Manuscript
Language: English
Official URL
https://doi.org/10.1007/978-3-030-29726-8_4

Abstract

Automated Machine Learning (Auto-ML) methods search for the best classification algorithm and its best hyper-parameter settings for each input dataset. Auto-ML methods normally maximize only predictive accuracy, ignoring the classification model’s interpretability – an important criterion in many applications. Hence, we propose a novel approach, based on Auto-ML, to investigate the trade-off between the predictive accuracy and the interpretability of classification-model representations. The experiments used the Auto-WEKA tool to investigate this trade-off. We distinguish between white box (interpretable) model representations and two other types of model representations: black box (non-interpretable) and grey box (partly interpretable). We consider as white box the models based on the following 6 interpretable knowledge representations: decision trees, If-Then classification rules, decision tables, Bayesian network classifiers, nearest neighbours and logistic regression. The experiments used 16 datasets and two runtime limits per Auto-WEKA run: 5 hours and 20 hours. Overall, the best white box model was more accurate than the best non-white box model in 4 of the 16 datasets in the 5-hour runs, and in 7 of the 16 datasets in the 20-hour runs. However, the predictive accuracy differences between the best white box and best non-white box models were often very small. If we accept a predictive accuracy loss of 1% in order to benefit from the interpretability of a white box model representation, we would prefer the best white box model in 8 of the 16 datasets in the 5-hour runs, and in 10 of the 16 datasets in the 20-hour runs.
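The abstract's final selection rule – prefer the best white box model whenever its accuracy loss relative to the best non-white box model is within a 1% tolerance – can be sketched in a few lines of Python. This is an illustrative sketch only: the accuracy figures below are hypothetical, and the tolerance is interpreted here as one percentage point (an assumption, since the abstract does not state whether the 1% is absolute or relative).

```python
def prefer_white_box(acc_white, acc_other, tolerance=0.01):
    """Return True if the white-box model should be preferred:
    either it is at least as accurate as the non-white-box model,
    or its accuracy loss is within the tolerance (here, 1 percentage
    point, assumed to be an absolute difference)."""
    return acc_white >= acc_other - tolerance

# Hypothetical per-dataset accuracies:
# (best white box model, best non-white box model)
results = {
    "dataset-A": (0.912, 0.915),  # loss of 0.3 points: within tolerance
    "dataset-B": (0.870, 0.895),  # loss of 2.5 points: exceeds tolerance
}

for name, (acc_w, acc_o) in results.items():
    choice = "white box" if prefer_white_box(acc_w, acc_o) else "non-white box"
    print(f"{name}: prefer {choice}")
```

Under this rule, a dataset where the white box model trails by only a fraction of a point still counts toward the white box tally, which is how the abstract's counts rise from 4/16 to 8/16 (5-hour runs) and from 7/16 to 10/16 (20-hour runs).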

Item Type: Conference or workshop item (Proceeding)
DOI/Identification number: 10.1007/978-3-030-29726-8_4
Uncontrolled keywords: classification, machine learning, Auto-ML, interpretable predictive models
Subjects: Q Science > Q Science (General) > Q335 Artificial intelligence
Divisions: Faculties > Sciences > School of Computing > Computational Intelligence Group
Depositing User: Alex Freitas
Date Deposited: 03 Oct 2019 17:41 UTC
Last Modified: 04 Feb 2020 04:11 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/77014 (The current URI for this page, for reference purposes)
Freitas, Alex A.: https://orcid.org/0000-0001-9825-4700
