Inference and Explanation

Twenty-five years ago Mohammed Sqalli and I wrote a paper titled Inference-Based Constraint Satisfaction Supports Explanation. From the abstract:

Constraint satisfaction problems are typically solved using search, augmented by general purpose consistency inference methods. This paper proposes a paradigm shift in which inference is used as the primary problem solving method, and attention is focused on special purpose, domain specific inference methods. While we expect this approach to have computational advantages, we emphasize here the advantages of a solution method that is more congenial to human thought processes.
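To make the contrast concrete, here is a minimal sketch (mine, not the paper's method) of what "inference as the primary problem solving method" can look like: a tiny CSP over not-equal constraints solved entirely by running one propagation rule to a fixpoint, with no search at all. The variable names and the single rule are illustrative assumptions.

```python
# A toy illustration of solving by inference alone: repeatedly apply one
# propagation rule until nothing changes, never branching. The rule:
# once a variable's domain is a singleton, delete that value from the
# domains of variables it must differ from.

def propagate(domains, neq_pairs):
    """Run singleton propagation over not-equal constraints to a fixpoint."""
    changed = True
    while changed:
        changed = False
        for x, y in neq_pairs:
            for a, b in ((x, y), (y, x)):
                if len(domains[a]) == 1:
                    v = next(iter(domains[a]))
                    if v in domains[b]:
                        domains[b] = domains[b] - {v}
                        changed = True
    return domains

# A puzzle-like instance that inference alone solves completely:
result = propagate({"A": {1}, "B": {1, 2}, "C": {2, 3}},
                   [("A", "B"), ("B", "C")])
# result == {"A": {1}, "B": {2}, "C": {3}}
```

Each deletion here is a small, human-readable deduction ("A is 1, so B can't be"), which is exactly what makes an inference trace congenial to explanation.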

“Domain specific” may be misleading here. While the inference methods we introduced were motivated by our “case study” — logic puzzles — they involved “global constraints”, which were not tied solely to that specific domain. Introducing global constraints is something of a cottage industry, and there is a Global Constraint Catalog, which currently has hundreds of specialized constraints and many pointers to associated specialized filtering (inference) methods.
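To illustrate what a specialized filtering method for a global constraint buys you, here is a deliberately naive sketch of Hall-set reasoning for alldifferent. It is an assumption-laden toy (real filters, such as Régin's, use bipartite matching rather than an exponential subset scan), but it shows a deduction that pairwise not-equal propagation cannot make.

```python
# Naive Hall-set filtering for an alldifferent constraint: if k variables'
# domains jointly cover only k values, those values are spoken for, and
# can be deleted from every other variable's domain. (Exponential scan
# for illustration only; practical filters use matching algorithms.)

from itertools import combinations

def alldifferent_filter(domains):
    """Prune domains of one alldifferent constraint via Hall sets."""
    names = list(domains)
    changed = True
    while changed:
        changed = False
        for k in range(1, len(names)):
            for group in combinations(names, k):
                union = set().union(*(domains[v] for v in group))
                if len(union) == k:  # Hall set found
                    for other in names:
                        if other not in group and domains[other] & union:
                            domains[other] = domains[other] - union
                            changed = True
    return domains

domains = alldifferent_filter({"X": {1, 2}, "Y": {1, 2}, "Z": {1, 2, 3}})
# X and Y together cover only {1, 2}, so Z = 3 -- a deduction that
# decomposing into pairwise X != Y, X != Z, Y != Z would miss.
```

This gap between the global view and the decomposed view is one reason the catalog pairs each constraint with its own filtering methods.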

Nicolas Beldiceanu and Helmut Simonis developed tools to automate the use of the Catalog: Constraint Seeker, which provided “a web interface to search for global constraints in the global constraint catalog, given positive and negative, fully instantiated (ground) examples”, and Model Seeker, which generated “finite domain constraint models from positive example solutions, for highly structured problems”. Model Seeker “is based on the global constraint catalog, providing the library of constraints that can be used in modeling, and the Constraint Seeker tool”.

Explanation, of course, is a hot topic now in AI — XAI — in particular with regard to combating bias. There has been considerable discussion of a “right to explanation”.

This post was prompted, in particular, by a recent paper on explanation, Efficiently Explaining CSPs with Unsatisfiable Subset Optimization, by Emilio Gamba, Bart Bogaerts, and Tias Guns, which cites Mohammed and me, as well as work I did with Chavalit Likitvivatanavong and Richard Wallace. This paper builds upon an ECAI-20 paper Step-wise Explanations of Constraint Satisfaction Problems (see the video), which in turn was motivated by the winning entry in the PTHG-19 Workshop’s Holy Grail Challenge (see the workshop talk). From the abstract:

We build on a recently proposed method for explaining solutions of constraint satisfaction problems. An explanation here is a sequence of simple inference steps, where the simplicity of an inference step is measured by the number and types of constraints and facts used, and where the sequence explains all logical consequences of the problem. We build on these formal foundations and tackle two emerging questions, namely how to generate explanations that are provably optimal (with respect to the given cost metric) and how to generate them efficiently.
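As a rough sketch of the idea (my reading, not the authors' algorithm), a step-wise explanation can be built greedily: at each step, among all currently applicable inference steps, pick a cheapest one, where cost counts the constraints and facts the step uses, and record the sequence. Below, the only inference rule is unit propagation over not-equal constraints, so every step costs one constraint plus one fact; all names are illustrative.

```python
# A toy greedy step-wise explainer: derive all consequences of a small
# CSP as a sequence of (facts_used, constraint, derived_fact) steps,
# choosing a cheapest applicable step each time. Cost here = number of
# constraints plus facts used (always 2 for this single rule).

def explain(domains, neq_pairs):
    """Return a sequence of (facts_used, constraint, derived) steps."""
    domains = {v: set(d) for v, d in domains.items()}
    steps = []
    while True:
        candidates = []
        for x, y in neq_pairs:
            for a, b in ((x, y), (y, x)):
                if len(domains[a]) == 1:
                    v = next(iter(domains[a]))
                    if v in domains[b]:
                        # this step uses one constraint and one known fact
                        candidates.append(
                            (2, (f"{a}={v}",), f"{a}!={b}", (b, v)))
        if not candidates:
            return steps
        cost, facts, con, (b, v) = min(candidates)
        domains[b] -= {v}
        steps.append((facts, con, f"{b}!={v}"))

steps = explain({"A": {1}, "B": {1, 2}, "C": {2, 3}},
                [("A", "B"), ("B", "C")])
for facts, con, derived in steps:
    print(f"from {facts} and {con}, derive {derived}")
```

The hard questions the paper addresses — proving a sequence optimal under the cost metric, and finding it efficiently — are exactly what this greedy toy does not attempt.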
