Article
BEEF: Balanced English Explanations of Forecasts
Publication date:
18/04/2019
Publisher:
Institute of Electrical and Electronics Engineers
Journal:
IEEE Transactions on Computational Social Systems
e-ISSN:
2329-924X
Language:
English
Resource type:
Published article
Subject classification:
Abstract
Understanding why different machine learning classifiers make specific predictions is difficult, mainly because the inner workings of the underlying algorithms are not amenable to the direct extraction of succinct explanations. In this paper, we address the problem of automatically extracting balanced explanations from the predictions of any classifier, explanations that cover not only why the prediction might be correct but also why it could be wrong. Our framework, called Balanced English Explanations of Forecasts (BEEF), generates such explanations in natural language. After showing that the problem of generating explanations is NP-complete, we focus on the development of a heuristic algorithm, empirically showing that it produces high-quality results both in terms of objective measures (with statistically significant effects shown for several parameter variations) and subjective evaluations based on a survey completed by 100 anonymous participants recruited via Amazon Mechanical Turk.
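To make the notion of a "balanced" explanation concrete, the sketch below shows one naive way to surface both reasons for and reasons against a black-box classifier's prediction. This is an illustrative assumption, not the BEEF heuristic from the paper: the mean-imputation perturbation, the scikit-learn dataset, and the English sentence template are all choices made here for the example only.

```python
# Naive pro/con explanation for a black-box classifier (illustration only,
# NOT the BEEF algorithm): score each feature by how much neutralizing it
# changes the predicted-class probability, then verbalize the extremes.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

data = load_breast_cancer()
X, y, names = data.data, data.target, data.feature_names
model = LogisticRegression(max_iter=5000).fit(X, y)

def balanced_explanation(model, x, top_k=3):
    """Return an English sentence with reasons supporting and
    undermining the model's prediction for instance x."""
    pred = model.predict([x])[0]
    base = model.predict_proba([x])[0][pred]
    deltas = []
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = X[:, i].mean()           # replace one feature by its mean
        p = model.predict_proba([x_pert])[0][pred]
        deltas.append((base - p, names[i]))  # > 0: feature supports the prediction
    deltas.sort(key=lambda d: d[0], reverse=True)
    pros = [n for d, n in deltas[:top_k] if d > 0]
    cons = [n for d, n in deltas[-top_k:] if d < 0]
    return (f"The prediction '{data.target_names[pred]}' is supported by "
            f"{', '.join(pros)}; however, it could be wrong because of "
            f"{', '.join(cons)}.")

print(balanced_explanation(model, X[0]))
```

The point of the sketch is only the output shape: a single natural-language statement that pairs supporting evidence with countervailing evidence, which is the kind of explanation the abstract describes.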
Keywords:
DECISION SUPPORT SYSTEMS, KNOWLEDGE ENGINEERING, MACHINE LEARNING
Collections
Articulos (ICIC)
Articulos de INSTITUTO DE CS. E INGENIERIA DE LA COMPUTACION
Citation
Grover, Sachin; Pulice, Chiara; Simari, Gerardo; Subrahmanian, Venkatramanan; BEEF: Balanced English Explanations of Forecasts; Institute of Electrical and Electronics Engineers; IEEE Transactions on Computational Social Systems; 6; 2; 18-4-2019; 350-364