Publication page

Assessing Heuristic Machine Learning Explanations with Model Counting

Publication type: Journal article

Material type: Text

Authors: Narodytska N., Shrotri A., Meel K.S., Ignatiev A., Marques-Silva J.

Journal: Lecture Notes in Computer Science: Proceedings of the 22nd International Conference on Theory and Applications of Satisfiability Testing (SAT 2019; Lisbon, Portugal)

Publication language: English

Book series: Lecture Notes in Computer Science

Volume: 11628

Pages: 267–278

Number of pages: 12

Publication year: 2019

Reporting year: 2019

DOI: 10.1007/978-3-030-24258-9_19

Abstract: Machine Learning (ML) models are widely used in decision-making procedures in finance, medicine, education, etc. In these areas, ML outcomes can directly affect humans, e.g., by deciding whether a person should get a loan or be released from prison. Therefore, we cannot blindly rely on black-box ML models and need to explain the decisions they make. This motivated the development of a variety of ML-explainer systems, including LIME and its successor Anchor. Due to the heuristic nature of the explanations produced by existing tools, it is necessary to validate them. We propose a SAT-based method to assess the quality of explanations produced by Anchor. We encode a trained ML model and an explanation for a given prediction as a propositional formula. Then, by using a state-of-the-art approximate model counter, we estimate the quality of the provided explanation as the number of solutions supporting it.
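The counting step described in the abstract can be illustrated with a small, self-contained sketch. Note that this is only a toy: the paper relies on a state-of-the-art approximate model counter, whereas the stand-in below enumerates assignments by brute force, and both the CNF encoding of the "trained model" and the anchor-style explanation (fixing `x1` to true) are invented for illustration.

```python
from itertools import product

def satisfies(clauses, assignment):
    """True if the assignment satisfies every clause (DIMACS-style literals:
    a positive integer i means variable i is true, -i means it is false)."""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)

def count_models(clauses, n_vars, fixed=None):
    """Brute-force model count, optionally with some variables fixed.

    A toy stand-in for an approximate counter, which would scale to
    formulas where exhaustive enumeration is infeasible.
    """
    fixed = fixed or {}
    count = 0
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        if all(assignment[v] == val for v, val in fixed.items()):
            count += satisfies(clauses, assignment)
    return count

# Invented "trained model": variable 4 (the prediction p) is defined by
# p <-> x1 AND (x2 OR x3), written as CNF clauses.
F = [[-4, 1], [-4, 2, 3], [-1, -2, 4], [-1, -3, 4]]

# Anchor-style explanation: "x1 = true" is claimed to imply p = true.
matching = count_models(F, 4, fixed={1: True})             # inputs matching the anchor
supporting = count_models(F, 4, fixed={1: True, 4: True})  # ...on which p is indeed true
print(supporting / matching)  # explanation precision: 3/4 = 0.75
```

The ratio of the two counts estimates how often the model's prediction agrees with the explanation over all inputs covered by it, which is the quantity the counting-based assessment targets.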

Indexed in WoS: Q4

Indexed in Scopus: No

Indexed in УБС: No

Indexed in RSCI (РИНЦ): Yes

Indexed in VAK (ВАК) list: No

Indexed in CORE: No