Publication page
Assessing Heuristic Machine Learning Explanations with Model Counting
Authors: Narodytska N., Shrotri A., Meel K.S., Ignatiev A., Marques-Silva J.
Journal: Lecture Notes in Computer Science: Proc. of the 22nd International Conference on Theory and Applications of Satisfiability Testing (SAT 2019), Lisbon, Portugal
Volume: 11628
Issue:
Year: 2019
Reporting year: 2019
Publisher:
Publisher location:
URL:
Projects:
DOI: 10.1007/978-3-030-24258-9_19
Abstract: Machine Learning (ML) models are widely used in decision-making procedures in finance, medicine, education, etc. In these areas, ML outcomes can directly affect humans, e.g., by deciding whether a person should get a loan or be released from prison. Therefore, we cannot blindly rely on black-box ML models and need to explain the decisions made by them. This motivated the development of a variety of ML-explainer systems, including LIME and its successor Anchor. Due to the heuristic nature of explanations produced by existing tools, it is necessary to validate them. We propose a SAT-based method to assess the quality of explanations produced by Anchor. We encode a trained ML model and an explanation for a given prediction as a propositional formula. Then, by using a state-of-the-art approximate model counter, we estimate the quality of the provided explanation as the number of solutions supporting it.
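The abstract's core idea — counting the assignments consistent with an explanation for which the model's prediction is unchanged — can be sketched on a toy scale. The function and the toy model below are hypothetical illustrations, not the paper's encoding: the paper compiles the trained model and the Anchor explanation into a propositional formula and invokes an approximate model counter, whereas this sketch brute-force enumerates Boolean feature vectors, which is only feasible for a handful of features.

```python
from itertools import product

def anchor_precision(model, anchor, prediction, n_features):
    """Count assignments consistent with `anchor` (a dict fixing some
    feature indices to values) for which `model` still returns
    `prediction`. Brute-force stand-in for approximate model counting.
    Returns (supporting, total)."""
    total = supporting = 0
    for bits in product([0, 1], repeat=n_features):
        if all(bits[i] == v for i, v in anchor.items()):
            total += 1
            supporting += (model(bits) == prediction)
    return supporting, total

# Hypothetical toy "model": predict 1 iff at least two of three features hold.
majority = lambda x: int(sum(x) >= 2)

# An explanation fixing features 0 and 1 to 1 fully supports prediction 1:
print(anchor_precision(majority, {0: 1, 1: 1}, 1, 3))  # → (2, 2)
# Fixing only feature 0 supports it on 3 of 4 completions:
print(anchor_precision(majority, {0: 1}, 1, 3))        # → (3, 4)
```

The ratio supporting/total plays the role of the explanation's precision; the paper obtains such counts for real models via a state-of-the-art approximate counter rather than enumeration.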
Indexed in WoS: Q4
Indexed in Scopus: No
Indexed in UBS: No
Indexed in RSCI (РИНЦ): Yes
Indexed in VAK list: No
Indexed in CORE: No
In press: 0