Category Archives: Citations

Small Islands

Semantic Scholar has informed me that my chapter with Alan Mackworth in the Handbook of Constraint Programming on Constraint Satisfaction: An Emerging Paradigm has been cited in the PREFACE to the proceedings of the International Conference on Small Islands Community: Momentum of Transformation of Small Islands Community in New Normal Era 12th December 2020, Maluku, Indonesia.

This was quite intriguing. However, it seems clear that we weren’t, in fact, cited in the preface. Perhaps in one of the papers? There are several papers in the proceedings that I would be particularly intrigued to see our connection to:

Community structure of conches (Strombus spp) in seagrass bed of Haria, Central Maluku, Indonesia

Fattening of mangrove crab (Scylla serrata) in wooden pen and plastic container

Feeding habits of giant trevally Caranx ignobilis (Carangidae) rearing in floating cage nets

Utilization of garlic as traditional fish handling in Molluccas Islands: case study on layang fish (Decapterus macrosoma, BLECKER)

Chemical properties of dried anchovy (Stolephorus sp) from Buru Island

Any of these would provide excellent further support for the claim I made in my IJCAI-20 award talk for The Ubiquity of Constraints. However, I fear Semantic Scholar has just been misled somehow.

Citation Sighting

I won’t pretend to be inundated with citations, but it is pleasant to still be getting them, and Semantic Scholar makes it easy to hear about them by sending you emails. However, as we’ll see, Semantic Scholar can sometimes use a little help 🙂 (I want to be clear, though, that my intention is not to diss Semantic Scholar, which is a wonderful service.)

Here are 6 citations for 6 papers that Semantic Scholar alerted me to in the past 6 days:

I received a notice that “Who ? What ? Where ? When ? Why ? and How ? 2” had cited Explaining ourselves: Human-aware constraint reasoning. Interesting title, but actually it turns out that the citing paper is titled Explainable Security. It appeared in Proceedings of the 6th Workshop on Hot Issues in Security Principles and Trust (HotSpot 2020) (another interesting title). The authors, Viganò and Magazzeni, propose “a new paradigm in security research: Explainable Security (XSec)” and discuss ‘the “Six Ws” of XSec (Who? What? Where? When? Why? and How?)’. So that explains part of the mystery of Semantic Scholar’s title for the paper, but it is still a mystery.

I was informed that Error Function Learning with Interpretable Compositional Networks for Constraint-Based Local Search cited two of my papers, In Pursuit of the Holy Grail and Progress Towards the Holy Grail. Semantic Scholar’s listing of the first paper in the References points to an entry that itself claims only 10 citations. That seemed low to me, and then I realized it was the version of the paper in ACM Computing Surveys. The version in the Constraints journal, which is the one the paper actually cites, has 59 citations. Semantic Scholar includes the second paper three times in the References, but twice in odd ways that, when hovered over, declare that the paper is not yet in Semantic Scholar’s corpus.

In fact, the paper cites three of my papers; it also cites Holy Grail Redux. However, Semantic Scholar thinks Holy Grail Redux was authored by “S. Nadis” so I was not notified of this citation. Perhaps S. Nadis was and is wondering why. When you click through on Holy Grail Redux in Semantic Scholar’s list of References for this paper you arrive at a listing for Holy Grail Redux where the “Topics from this paper” include only “Bad Mojo”, and when you click on View via Publisher, you end up here!

Taking Advantage of Stable Sets of Variables in Constraint Satisfaction Problems, which I co-authored with Mike Quinn, was cited by two papers. In Neural Regret-Matching for Distributed Constraint Optimization Problems, Yanchen Deng, Runshen Yu, Xinrun Wang, and Bo An are “incorporating deep neural networks in solving DCOPs for the first time”. In Utility distribution matters: enabling fast belief propagation for multi-agent optimization with dense local utility function, Yanchen Deng and Bo An “theoretically show the correctness and superiority of our proposed algorithms”.

Finally, Semantic Scholar informed me that Exploring Automatic Fitness Evaluation for Evolutionary Typesetting cited a paper, Creating Personalized Documents: An Optimization Approach, which I wrote with Lisa Purvis, Steven Harrington and Barry O’Sullivan. In their paper Sérgio M. Rebelo, Tiago Martins, J. Bicker and P. Machado “present an evolutionary system for the automatic typesetting of typographic posters”. In our paper we “formalized custom document creation as a multiobjective optimization problem, and use a genetic algorithm to assemble and transform compound personalized documents”. Unfortunately, while Semantic Scholar knew enough to alert me to the citation, the one reference it lists at the citing paper’s webpage is not our paper.

I received the first notice from Semantic Scholar on June 20th and the last on June 25th, so 6 citations for 6 papers in 6 days. Just a nice coincidence, not any kind of record to be sure. Google Scholar says that Geoffrey Hinton received 85,082 citations in 2020, which works out to approximately 233 per day. Well actually “only” approximately 232, because 2020 was a leap year.

It did feel like a long year, didn’t it.

Inference and Explanation

Twenty-five years ago Mohammed Sqalli and I wrote a paper titled Inference-Based Constraint Satisfaction Supports Explanation. From the abstract:

Constraint satisfaction problems are typically solved using search, augmented by general purpose consistency inference methods. This paper proposes a paradigm shift in which inference is used as the primary problem solving method, and attention is focused on special purpose, domain specific inference methods. While we expect this approach to have computational advantages, we emphasize here the advantages of a solution method that is more congenial to human thought processes.

“Domain specific” may be misleading here. While the inference methods we introduced were motivated by our “case study” — logic puzzles — they involved “global constraints”, which were not tied solely to that specific domain. Introducing global constraints is something of a cottage industry, and there is a Global Constraint Catalog, which currently has hundreds of specialized constraints and many pointers to associated specialized filtering (inference) methods.
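As a toy illustration (my own, not drawn from our paper or the Catalog), here is a naive filtering method for the well-known alldifferent global constraint: whenever a variable’s domain shrinks to a single value, that value is pruned from every other domain, and the process repeats until nothing changes.

```python
# Naive propagation for an "alldifferent" constraint: any value fixed by a
# singleton domain is removed from every other variable's domain, repeatedly.

def filter_alldifferent(domains):
    """domains maps variable name -> set of candidate values; mutated in place."""
    changed = True
    while changed:
        changed = False
        for var, dom in domains.items():
            if len(dom) == 1:
                (val,) = dom
                for other, other_dom in domains.items():
                    if other != var and val in other_dom:
                        other_dom.discard(val)
                        changed = True
    return domains

# Example: fixing x to 1 forces y to 2, which in turn forces z to 3.
doms = {"x": {1}, "y": {1, 2}, "z": {2, 3}}
print(filter_alldifferent(doms))  # {'x': {1}, 'y': {2}, 'z': {3}}
```

Real filtering algorithms for alldifferent are far stronger (they reason about matchings, not just singletons), but even this sketch shows how a special-purpose inference method can solve a puzzle-like problem without search.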

Nicolas Beldiceanu and Helmut Simonis developed tools to automate the use of the Catalog: Constraint Seeker, which provided “a web interface to search for global constraints in the global constraint catalog, given positive and negative, fully instantiated (ground) examples”, and Model Seeker, which generated “finite domain constraint models from positive example solutions, for highly structured problems”. Model Seeker “is based on the global constraint catalog, providing the library of constraints that can be used in modeling, and the Constraint Seeker tool”.

Explanation, of course, is a hot topic now in AI — XAI — in particular with regard to combating bias. There has been considerable discussion of a “right to explanation”.

This post was prompted, in particular, by a recent paper on explanation, Efficiently Explaining CSPs with Unsatisfiable Subset Optimization, by Emilio Gamba, Bart Bogaerts, and Tias Guns, which cites Mohammed and me, as well as work I did with Chavalit Likitvivatanavong, and Richard Wallace. This paper builds upon an ECAI-20 paper Step-wise Explanations of Constraint Satisfaction Problems (see the video), which in turn was motivated by the winning entry in the PTHG-19 Workshop‘s Holy Grail Challenge (see the workshop talk). From the abstract:

We build on a recently proposed method for explaining solutions of constraint satisfaction problems. An explanation here is a sequence of simple inference steps, where the simplicity of an inference step is measured by the number and types of constraints and facts used, and where the sequence explains all logical consequences of the problem. We build on these formal foundations and tackle two emerging questions, namely how to generate explanations that are provably optimal (with respect to the given cost metric) and how to generate them efficiently.

Monarch of All I Survey

Kostas Stergiou has published in Artificial Intelligence Review a very interesting “review and evaluation” of adaptive constraint propagation in constraint satisfaction. It surveys, classifies, and advances the subject, and suggests directions for further work.

With the current prominence of machine learning, I was happy to see that Stergiou unearthed an old paper Rick Wallace and I wrote on Selective Relaxation for Constraint Satisfaction Problems, which might be regarded as including an early application of machine learning to CP. That presupposes a rather generous definition of “learning”, which might even apply to another of the cited papers, one I wrote with Dan Sabin on Understanding and Improving the MAC Algorithm. “Adaptation” might be a better term. A more straightforward example of learning is the cited paper I wrote with Susan Epstein, Rick Wallace and Xingjian Li on Learning Propagation Policies. Kostas concludes his proposals for future work with a timely call for exploring the further potential of machine learning. (Incidentally, I ran across a paper on arXiv recently that applies deep learning to the related issue of variable ordering.)

I’d like to see more survey papers being published. It occurs to me that every thesis will include the makings of such a paper in a “related work” section or chapter. I would encourage students not just to publish papers based on the novel results in their thesis, but to publish a survey article based on their thesis work as well. The Constraints journal even has a “Surveys Editor”, currently Pascal Van Hentenryck.

I have added a link to Kostas’ paper in the “Surveys/Overviews/Bibliographies” section of my Resources website. If you publish a new survey, or have published one that is not already linked to at the website, let me know so I can link to it.

Here, There, and Everywhere

I continue to be struck by the variety of applications found in my Google or Semantic Scholar “alerts” for papers citing mine. Of course, this doesn’t necessarily mean that my work played any serious role in theirs, but since the papers of mine cited almost certainly involve constraint satisfaction it does illustrate the way constraint satisfaction can relate to so many other fields. Some recent examples:

A Detective Story

I recently received the following Google Scholar alert for a “new” citation to my work:

[HTML] page 2

S Woods, A Quilici, Q Yang

Different program understanding algorithms often use different representational frameworks and take advantage of numerous heuristic tricks. This situation makes it difficult to compare these approaches and their performance. This paper addresses this problem by proposing constraint satisfaction as a general framework for describing program understanding algorithms, demonstrating how to transform a relatively complex existing program understanding algorithm into an instance of a …

“page 2” seemed rather an odd title for a paper. 🙂 Turns out it was not really a new citation, but seemed to refer to “page 2” of a 1995 tech report. But the tech report actually had a very intriguing title: Program Understanding: A Constraint Satisfaction Modeling Framework; Understanding as Plan Recognition.

I found a later related published work: Steven G. Woods, Qiang Yang: Program Understanding as Constraint Satisfaction: Representation and Reasoning Techniques. Autom. Softw. Eng. 5(2): 147-181 (1998).

And a whole book: Constraint-Based Design Recovery for Software Reengineering: Theory and Experiments. Woods, Steven G., Quilici, Alexander E., Qiang Yang.

Now I’m acquainted with Qiang Yang; we served on the AAAI Executive Council together. (Qiang was also President of IJCAI and General Chair of AAAI-21, among many other accomplishments.) I was curious why Qiang didn’t seem to have followed up on this intriguing work after 1998. Had it been a dead end? Had he simply moved on to other things?

I discovered that Steven Woods was Qiang’s Ph.D. student. Woods himself co-authored a paper on “Program plan matching: experiments with a constraint-based approach” in 2000, but then disappeared off the face of DBLP shortly thereafter. On to Wikipedia: in 1998 Steven co-founded a company, and went on to a distinguished career as an entrepreneur and industry executive. According to LinkedIn, he is now Senior Engineering Director and Site Lead for Google, and Google’s Tech Lead responsible for representing Google in Canada.

So it appears that industry’s gain was constraint programming and software engineering’s loss. The work has continued to be cited. Semantic Scholar provides a citation to the above book from as recently as 2017:

YAZILIM YENİDEN YAPILAMAYA YÖNELİK BİR KURUMSAL MİMARI: MODEL GÜDÜMLÜ VE ONTOLOJİ TABANLI BİR YAKLAŞIM

which Google Translate tells me is Turkish for:

A CORPORATE ARCHITECTURE FOR SOFTWARE RECONSTRUCTION: A MODEL-DRIVEN AND ONTOLOGY-BASED APPROACH

But if there is someone out there with an interest in applying constraint satisfaction to program understanding, I suspect there is fertile ground yet to explore here. And perhaps you could get support from Google, especially if you live in Canada. 🙂

Configuration

I just received a notification from Google Scholar that Automated Configuration and Selection of SAT Solvers by Holger Hoos, Frank Hutter, and Kevin Leyton-Brown provided the 100th citation of my paper with Susan Epstein, Richard Wallace, Anton Morozov, and Bruce Samuels on The Adaptive Constraint Engine. Which is cool, but the funny thing is that when I was scanning my mailbox, “configuration” caught my eye, and I thought that the citation would be to my work on configuration.

There has been a long series of configuration workshops, going back to one organized by Boi Faltings, Gerhard Friedrich and myself in 1999. To quote from the call for papers for the 2020 workshop: “Configuration is a special type of design approach where the product being configured is composed from instances of a set of predefined component types that can be combined in ways defined by a set of constraints.”

Configuration provides a great example of the practical application of constraint satisfaction. A good place to see this is a 2016 paper in AI Magazine by Andreas Falkner, Gerhard Friedrich, Alois Haselböck, Gottfried Schenner, and Herwig Schreiner on Twenty-Five Years of Successful Application of Constraint Technologies at Siemens.
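To make the definition concrete, here is a tiny, made-up configuration problem cast as constraint satisfaction; the component types, instances, and compatibility constraints are all invented for illustration, and a real configurator would use a constraint solver rather than brute-force enumeration.

```python
# Configuration as constraint satisfaction: pick one instance of each
# predefined component type so that all compatibility constraints hold.

from itertools import product

component_types = {
    "cpu": ["fast", "slow"],
    "psu": ["300W", "500W"],
    "gpu": ["basic", "gaming"],
}

# Compatibility constraints over complete candidate configurations.
constraints = [
    lambda c: not (c["gpu"] == "gaming" and c["psu"] == "300W"),  # gaming GPU needs the 500W supply
    lambda c: not (c["cpu"] == "slow" and c["gpu"] == "gaming"),  # gaming GPU needs the fast CPU
]

def configurations():
    """Enumerate all consistent configurations by brute force."""
    names = list(component_types)
    for values in product(*(component_types[n] for n in names)):
        candidate = dict(zip(names, values))
        if all(check(candidate) for check in constraints):
            yield candidate

for config in configurations():
    print(config)
```

Of the eight candidate combinations, five satisfy both constraints; the constraints do exactly the job the workshop definition describes, delimiting how component instances may be combined.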

Resource Controllability of Business Processes Under Conditional Uncertainty

Matteo Zavatteri, Carlo Combi & Luca Viganò: Resource Controllability of Business Processes Under Conditional Uncertainty. Journal on Data Semantics (2021)

This paper proposes “conditional constraint networks with decisions (CCNDs) as a model to encode business processes that involve access control and conditional branches that may be both controllable and uncontrollable”. So it both provides a new extension of the CSP paradigm and a real-world motivation for it. The main citation to my work is to a paper with Mihaela Sabin on conditional constraint satisfaction.

I’m particularly happy to draw attention to this paper, as constraints researchers are perhaps unlikely to subscribe to the Journal on Data Semantics. Moreover, it appears that the journal will be ceasing publication this year. I’ve had the experience myself a few times of publishing in a journal that died shortly afterward; I’ve told myself that I was not culpable. I see that Zavatteri has a paper on Dynamic Controllability and (J,K)-Resiliency in Generalized Constraint Networks with Uncertainty in ICAPS 2020. I hope to see him publishing at the CP conference.

MCS Extraction with Sublinear Oracle Queries

Carlos Mencía, Alexey Ignatiev, Alessandro Previti, Joao Marques-Silva:
MCS Extraction with Sublinear Oracle Queries. SAT 2016: 342-360

Abstract. Given an inconsistent set of constraints, an often studied problem is to compute an irreducible subset of the constraints which, if relaxed, enable the remaining constraints to be consistent. In the case of unsatisfiable propositional formulas in conjunctive normal form, such irreducible sets of constraints are referred to as Minimal Correction Subsets (MCSes). MCSes find a growing number of applications, including the approximation of maximum satisfiability and as an intermediate step in the enumeration of minimal unsatisfiability. A number of efficient algorithms have been proposed in recent years, which exploit a wide range of insights into the MCS extraction problem. One open question is to find the best worst-case number of calls to a SAT oracle, when the calls to the oracle are kept simple, and given reasonable definitions of simple SAT oracle calls. This paper develops novel algorithms for computing MCSes which, in specific settings, are guaranteed to require asymptotically fewer than linear calls to a SAT oracle, where the oracle calls can be viewed as simple. The experimental results, obtained on existing problem instances, demonstrate that the new algorithms contribute to improving the state of the art.

Relating as it does to explanation, this is a timely citation. The authors have made numerous contributions to this area.
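The basic linear-scan baseline that such algorithms improve upon can be sketched in a toy setting of my own devising (the paper’s contribution is precisely to beat this one-oracle-call-per-constraint scheme; my “oracle” here is a brute-force consistency check, not a SAT solver):

```python
# Linear-scan MCS extraction: grow a satisfiable subset one constraint at a
# time; the constraints that cannot be added form a Minimal Correction Subset.

from itertools import product

def satisfiable(constraints, variables, values):
    """Toy stand-in for a SAT oracle: brute-force over all assignments."""
    return any(
        all(check(dict(zip(variables, vs))) for check in constraints)
        for vs in product(values, repeat=len(variables))
    )

def mcs(constraints, variables, values):
    """One oracle call per constraint; the complement of a maximal
    satisfiable subset is a Minimal Correction Subset."""
    kept, corrected = [], []
    for check in constraints:
        (kept if satisfiable(kept + [check], variables, values) else corrected).append(check)
    return corrected

# x == 0 and x == 1 conflict over domain {0, 1}; dropping one of the two
# equalities restores consistency, so the MCS has size one.
cs = [lambda a: a["x"] == 0, lambda a: a["x"] == 1, lambda a: a["x"] >= 0]
removed = mcs(cs, ["x"], [0, 1])
print(len(removed))  # 1
```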

The citation is to:

Barry O’Callaghan, Barry O’Sullivan, Eugene C. Freuder: Generating Corrective Explanations for Interactive Constraint Satisfaction. CP 2005: 445-459

Abstract
Interactive tasks such as online configuration and e-commerce can be modelled as constraint satisfaction problems (CSPs). These can be solved interactively by a user assigning values to variables. The user may require advice and explanations from a system to help him/her find a satisfactory solution. Explanations of failure in constraint programming tend to focus on conflict. However, what is really desirable is an explanation that is corrective in the sense that it provides the basis for moving forward in the problem-solving process. More specifically, when faced with a dead-end, or when a desirable value has been removed from a domain, we need to compute alternative assignments for a subset of the assigned variables that enables the user to move forward. This paper defines this notion of corrective explanation, and proposes an algorithm to generate such explanations. The approach is shown to perform well on both real-world configuration benchmarks and randomly generated problems.
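As a rough illustration of the idea (a brute-force sketch of my own, not the algorithm from the paper): when a desired value conflicts with earlier choices, a corrective explanation identifies a smallest set of those choices to reassign so that the desired value becomes consistent again.

```python
# Brute-force "corrective explanation": try reassigning ever-larger subsets
# of the user's earlier choices until the desired value can be restored.

from itertools import combinations, product

def corrective_explanation(domains, constraints, assigned, want_var, want_val):
    """Return (choices_to_change, repaired_assignment), or None if no repair exists."""
    desired = dict(assigned, **{want_var: want_val})
    changeable = [v for v in assigned if v != want_var]
    for size in range(len(changeable) + 1):          # smallest repairs first
        for subset in combinations(changeable, size):
            frozen = {v: val for v, val in desired.items() if v not in subset}
            free = [v for v in domains if v not in frozen]
            for vals in product(*(domains[v] for v in free)):
                candidate = dict(frozen, **dict(zip(free, vals)))
                if all(check(candidate) for check in constraints):
                    return set(subset), candidate
    return None

# The user set x=1 and then y=2, but now wants y=1; x must be reassigned.
domains = {"x": {1, 2}, "y": {1, 2}}
constraints = [lambda a: a["x"] != a["y"]]
changed, sol = corrective_explanation(domains, constraints, {"x": 1, "y": 2}, "y", 1)
print(changed, sol)  # {'x'} with the repaired assignment x=2, y=1
```

The explanation is “corrective” in exactly the abstract’s sense: rather than merely reporting the conflict, it hands the user an alternative assignment for a subset of their choices that lets problem solving move forward.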