Author Archives: Gene Freuder


About Gene Freuder

Emeritus Professor of Computer Science. AAAS, AAAI, ECAI Fellow. Member of the Royal Irish Academy.

Call for Submissions to PTHG-21

PTHG-21: The Fifth Workshop on Progress Towards the Holy Grail

October 25th, 2021, at CP2021

This Workshop is one of a series.

Note: CP2021 and its workshops are virtual, with free registration.

Description:

In 1996 the paper “In Pursuit of the Holy Grail” (also here) proposed that Constraint Programming was well-positioned to pursue the Holy Grail of computer science: the user simply states the problem and the computer solves it. It was followed about a decade later by “Holy Grail Redux”, and then about a decade after that by “Progress Towards the Holy Grail”. This series of workshops aims to encourage and disseminate progress towards that goal, in particular regarding work on automating:

  • Acquisition: user-interaction, learning, debugging, maintaining, etc. 
  • Reformulation: transformation for efficient solution, redundant models, etc. 
  • Solving: adaptive parameter tuning, automated selection from portfolios, learning heuristics, deep learning, etc. 
  • Explanation: reasons for failure, implications for choices, etc. 

Of particular interest is the intersection of the Holy Grail goal with the increasing attention being paid to machine learning, explainable AI, and human-centric AI.

Organizing Committee:

Chair: Eugene Freuder, University College Cork, Ireland, eugene.freuder@insight-centre.org 

Christian Bessiere, University of Montpellier, France 

Tias Guns, KU Leuven, Belgium 

Lars Kotthoff, University of Wyoming, USA  

Ian Miguel, University of St Andrews, Scotland  

Michela Milano, University of Bologna, Italy 

Helmut Simonis, University College Cork, Ireland 

Submissions:

Submissions may be of any length, and in any format. They may be abstracts, position papers, technical papers, or demos. They may review your own previous work or survey a topic area. They may present new research or suggest directions for further progress. They may propose research roadmaps, demonstration domains, or collaborative projects. They may be proposals for measuring progress, and, in particular, for data sets or competitions to stimulate and compare progress. 

Previously Published Track. Authors are encouraged to submit to this track pointers to relevant papers that they have published elsewhere since the date of the last workshop, PTHG-20, September 7, 2020. The objective is to further the Workshop goal of disseminating progress in this area. 

Submissions should be emailed, in PDF form, with subject line “PTHG-21 Submission”, directly to the Workshop chair, at: eugene.freuder@insight-centre.org.

Submissions to the Previously Published Track should be in the form of a PDF that clearly identifies itself as a submission to the Previously Published Track, contains bibliographic information on the previous publication, and provides a URL pointing to the paper (if possible without violating copyright, to a full version of the paper).

Authors may make multiple submissions if they wish. All submissions that appropriately address the topic of the workshop will be accepted as is, without further revision, and will be made available at the workshop website.

The deadline for submissions is September 15, 2021. Decisions on acceptance will be sent by September 20, 2021. 

Authors of accepted submissions will be expected to upload a video presentation of the requested length by September 30, 2021; otherwise the submission will be withdrawn from the program and proceedings (if any). The conference will host these videos. Details of the upload process will become available. 

Website: PTHG-21: The Fifth Workshop on Progress Towards the Holy Grail

Small Islands

Semantic Scholar has informed me that my chapter with Alan Mackworth in the Handbook of Constraint Programming on Constraint Satisfaction: An Emerging Paradigm has been cited in the PREFACE to the proceedings of the International Conference on Small Islands Community: Momentum of Transformation of Small Islands Community in New Normal Era, 12th December 2020, Maluku, Indonesia.

This was quite intriguing. However, it seems clear that we weren’t, in fact, cited in the preface. Perhaps in one of the papers? There are several papers in the proceedings that I would be particularly intrigued to see our connection to:

Community structure of conches (Strombus spp) in seagrass bed of Haria, Central Maluku, Indonesia

Fattening of mangrove crab (Scylla serrata) in wooden pen and plastic container

Feeding habits of giant trevally Caranx ignobilis (Carangidae) rearing in floating cage nets

Utilization of garlic as traditional fish handling in Molluccas Islands: case study on layang fish (Decapterus macrosoma, BLECKER)

Chemical properties of dried anchovy (Stolephorus sp) from Buru Island

Any of these would provide excellent further support for the claim I made in my IJCAI-20 award talk for The Ubiquity of Constraints. However, I fear Semantic Scholar has just been misled somehow.

Citation Sighting

I won’t pretend to be inundated with citations, but it is pleasant to still be getting them, and Semantic Scholar can make it easy to hear about them by sending you emails. However, as we’ll see, Semantic Scholar can sometimes use a little help 🙂 (I want to be clear, though, that my intention is not to diss Semantic Scholar, which is a wonderful service.)

Here are 6 citations for 6 papers that Semantic Scholar alerted me to in the past 6 days:

I received a notice that “Who ? What ? Where ? When ? Why ? and How ? 2” had cited Explaining ourselves: Human-aware constraint reasoning. Interesting title, but it turns out that the citing paper is actually titled Explainable Security. It appeared in Proceedings of the 6th Workshop on Hot Issues in Security Principles and Trust (HotSpot 2020) (another interesting title). The authors, Viganò and Magazzeni, propose “a new paradigm in security research: Explainable Security (XSec)” and discuss ‘the “Six Ws” of XSec (Who? What? Where? When? Why? and How?)’. So that explains part of the mystery of Semantic Scholar’s title for the paper, though the rest remains a mystery.

I was informed that Error Function Learning with Interpretable Compositional Networks for Constraint-Based Local Search cited two of my papers, In Pursuit of the Holy Grail and Progress Towards the Holy Grail. Semantic Scholar’s listing of the first paper in the References points to an entry that itself claims only 10 citations. That seemed low to me, and then I realized it was the version of the paper in ACM Computing Surveys. The version in the Constraints journal, which is the one the paper actually cites, has 59 citations. Semantic Scholar includes the second paper three times in the References, but twice in odd ways, which, if hovered over, declare that the paper is not yet in Semantic Scholar’s corpus.

In fact, the paper cites three of my papers; it also cites Holy Grail Redux. However, Semantic Scholar thinks Holy Grail Redux was authored by “S. Nadis”, so I was not notified of this citation. Perhaps S. Nadis was and is wondering why. When you click through on Holy Grail Redux in Semantic Scholar’s list of References for this paper, you arrive at a listing for Holy Grail Redux where the “Topics from this paper” include only “Bad Mojo”, and when you click on View via Publisher, you end up here!

Taking Advantage of Stable Sets of Variables in Constraint Satisfaction Problems, which I co-authored with Mike Quinn, was cited by two papers. In Neural Regret-Matching for Distributed Constraint Optimization Problems, Yanchen Deng, Runshen Yu, Xinrun Wang, and Bo An are “incorporating deep neural networks in solving DCOPs for the first time”. In Utility distribution matters: enabling fast belief propagation for multi-agent optimization with dense local utility function, Yanchen Deng and Bo An “theoretically show the correctness and superiority of our proposed algorithms”.

Finally, Semantic Scholar informed me that Exploring Automatic Fitness Evaluation for Evolutionary Typesetting cited a paper, Creating Personalized Documents: An Optimization Approach, which I wrote with Lisa Purvis, Steven Harrington, and Barry O’Sullivan. In their paper, Sérgio M. Rebelo, Tiago Martins, J. Bicker, and P. Machado “present an evolutionary system for the automatic typesetting of typographic posters”. In our paper, we “formalized custom document creation as a multiobjective optimization problem, and use a genetic algorithm to assemble and transform compound personalized documents”. Unfortunately, while Semantic Scholar knew enough to alert me to the citation, the one reference it lists on the citing paper’s webpage is not our paper.

I received the first notice from Semantic Scholar on June 20th and the last on June 25th, so 6 citations for 6 papers in 6 days. Just a nice coincidence, not any kind of record to be sure. Google Scholar says that Geoffrey Hinton received 85,082 citations in 2020, which works out to approximately 233 per day. Well actually “only” approximately 232, because 2020 was a leap year.
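
For anyone who wants to check that arithmetic, here it is as a couple of lines of Python (the 85,082 figure is Google Scholar’s, quoted above):

```python
# A quick check of the citations-per-day arithmetic.
citations_2020 = 85_082
print(round(citations_2020 / 365, 1))   # 233.1, the "approximately 233 per day"
print(round(citations_2020 / 366, 1))   # 232.5, since 2020 had 366 days
```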

It did feel like a long year, didn’t it.

Ban the Bullets

Presentations are often full of bullet point slides. Ostensibly this is to help the audience identify and recall the main points of the talk. In practice, the bullet points serve as a crutch for the speaker. In the worst case, the talk devolves largely into a verbatim recitation of densely packed bullet points: a guaranteed soporific. In any case, the speaker is competing with the slides. Should the audience members read the bullet points or listen to the speaker? They try to do both, but people are notoriously bad at multi-tasking.

For boring speakers, I may read the bullet points quickly, and if that seems sufficient, I can return to my email until the next slide, while the speaker drones on. For more interesting speakers, I may purposely ignore the bullet points to focus on the speaker. There may be information in the bullet points that the speaker does not address, but rather than worrying about reading it before the slide changes, I can console myself with the knowledge that my memory of the talk will be subject to rapid exponential decay anyway.

So if not bullets, what should be on the slides? I plan to address that in a future post.

Inference and Explanation

Twenty-five years ago Mohammed Sqalli and I wrote a paper titled Inference-Based Constraint Satisfaction Supports Explanation. From the abstract:

Constraint satisfaction problems are typically solved using search, augmented by general purpose consistency inference methods. This paper proposes a paradigm shift in which inference is used as the primary problem solving method, and attention is focused on special purpose, domain specific inference methods. While we expect this approach to have computational advantages, we emphasize here the advantages of a solution method that is more congenial to human thought processes.

“Domain specific” may be misleading here. While the inference methods we introduced were motivated by our “case study” — logic puzzles — they involved “global constraints”, which were not tied solely to that specific domain. Introducing global constraints is something of a cottage industry, and there is a Global Constraint Catalog, which currently has hundreds of specialized constraints and many pointers to associated specialized filtering (inference) methods.
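
For readers who have not browsed the Catalog, here is a toy sketch of what a filtering (inference) method does, using the familiar alldifferent constraint. This is only my illustration of the idea; the Catalog’s actual propagators (Régin’s matching-based algorithm for alldifferent, for instance) prune far more aggressively.

```python
# Toy illustration of "filtering" (inference) for an alldifferent global constraint:
# whenever a variable's domain shrinks to a single value, that value is removed
# from the domains of the other variables. Real propagators prune much more,
# but the flavour is the same.

def filter_alldifferent(domains):
    """domains: dict mapping variable name -> set of possible values."""
    changed = True
    while changed:
        changed = False
        for var, dom in domains.items():
            if len(dom) == 1:
                (value,) = dom
                for other, other_dom in domains.items():
                    if other != var and value in other_dom:
                        other_dom.discard(value)
                        changed = True
    return domains

print(filter_alldifferent({"X": {1}, "Y": {1, 2}, "Z": {1, 2, 3}}))
# -> X={1}, Y={2}, Z={3}: a few inference steps solve this puzzle without search.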

Nicolas Beldiceanu and Helmut Simonis developed tools to automate the use of the Catalog: Constraint Seeker, which provided “a web interface to search for global constraints in the global constraint catalog, given positive and negative, fully instantiated (ground) examples”, and Model Seeker, which generated “finite domain constraint models from positive example solutions, for highly structured problems”. Model Seeker “is based on the global constraint catalog, providing the library of constraints that can be used in modeling, and the Constraint Seeker tool”.

Explanation, of course, is a hot topic now in AI — XAI — in particular with regard to combating bias. There has been considerable discussion of a “right to explanation”.

This post was prompted, in particular, by a recent paper on explanation, Efficiently Explaining CSPs with Unsatisfiable Subset Optimization, by Emilio Gamba, Bart Bogaerts, and Tias Guns, which cites Mohammed and me, as well as work I did with Chavalit Likitvivatanavong and Richard Wallace. This paper builds upon an ECAI-20 paper, Step-wise Explanations of Constraint Satisfaction Problems (see the video), which in turn was motivated by the winning entry in the PTHG-19 Workshop‘s Holy Grail Challenge (see the workshop talk). From the abstract:

We build on a recently proposed method for explaining solutions of constraint satisfaction problems. An explanation here is a sequence of simple inference steps, where the simplicity of an inference step is measured by the number and types of constraints and facts used, and where the sequence explains all logical consequences of the problem. We build on these formal foundations and tackle two emerging questions, namely how to generate explanations that are provably optimal (with respect to the given cost metric) and how to generate them efficiently.
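
To make the idea concrete, here is a toy sketch of my own (emphatically not the authors’ algorithm): repeatedly derive one new fact at a time, always choosing the simplest available inference step, and report the resulting sequence as the explanation.

```python
# Toy sketch of the "sequence of simple inference steps" idea (my own illustration).
# Each step derives one new fact using as few known facts as possible; the
# resulting sequence of steps is the explanation.

def explain(facts, rules, cost):
    """facts: set of known facts; rules: list of (premises, conclusion) pairs;
    cost: function scoring a step by the premises it uses (lower = simpler)."""
    explanation = []
    progress = True
    while progress:
        progress = False
        # Candidate steps: rules whose premises are all known and whose conclusion is new.
        candidates = [(premises, conclusion) for premises, conclusion in rules
                      if premises <= facts and conclusion not in facts]
        if candidates:
            premises, conclusion = min(candidates, key=lambda step: cost(step[0]))
            explanation.append((premises, conclusion))
            facts.add(conclusion)
            progress = True
    return explanation

steps = explain(
    facts={"A"},
    rules=[({"A"}, "B"), ({"A", "B"}, "C")],
    cost=len,  # simplicity = number of facts used
)
for premises, conclusion in steps:
    print(f"from {sorted(premises)} derive {conclusion}")
# from ['A'] derive B
# from ['A', 'B'] derive C
```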

Monarch of All I Survey

Kostas Stergiou has published in Artificial Intelligence Review a very interesting “review and evaluation” of adaptive constraint propagation in constraint satisfaction. It surveys, classifies, and advances the subject, and suggests directions for further work.

With the current prominence of machine learning, I was happy to see that Stergiou unearthed an old paper Rick Wallace and I wrote on Selective Relaxation for Constraint Satisfaction Problems, which might be regarded as including an early application of machine learning to CP. That presupposes a rather generous definition of “learning”, which might even apply to another of the cited papers, one I wrote with Dan Sabin on Understanding and Improving the MAC Algorithm. “Adaptation” might be a better term. A more straightforward example of learning is the cited paper I wrote with Susan Epstein, Rick Wallace, and Xingjian Li on Learning Propagation Policies. Kostas concludes his proposals for future work with a timely call for exploring the further potential of machine learning. (Incidentally, I ran across a paper on arXiv recently that applies deep learning to the related issue of variable ordering.)
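
As a rough illustration of what “adaptive propagation” means (my own sketch, not any of the surveyed methods): a solver can keep simple statistics on whether its more expensive propagator is actually pruning anything, and fall back to a cheaper one when it rarely pays off.

```python
# A rough sketch of adaptive propagation (my illustration, not a surveyed method).
# The solver tracks how often its expensive propagator prunes anything extra and
# skips it when the recent payoff rate drops below a threshold.

import random

class AdaptivePropagator:
    def __init__(self, cheap, expensive, threshold=0.2, window=50):
        self.cheap = cheap            # e.g. forward checking
        self.expensive = expensive    # e.g. maintaining full arc consistency
        self.threshold = threshold    # minimum recent payoff rate to keep using it
        self.window = window
        self.payoffs = []             # 1 if an expensive call pruned something, else 0

    def propagate(self, domains):
        recent = self.payoffs[-self.window:]
        rate = sum(recent) / len(recent) if recent else 1.0
        if rate >= self.threshold or random.random() < 0.05:   # occasionally re-test
            size_before = sum(len(d) for d in domains.values())
            domains = self.expensive(domains)
            size_after = sum(len(d) for d in domains.values())
            self.payoffs.append(1 if size_after < size_before else 0)
            return domains
        return self.cheap(domains)

# Stand-in propagators (identity functions) just to show the interface.
prop = AdaptivePropagator(cheap=lambda d: d, expensive=lambda d: d)
prop.propagate({"X": {1, 2, 3}, "Y": {1, 2}})
```

Real adaptive schemes, and the learned policies Kostas discusses, are of course far richer than a single payoff counter; the point is only that the solver adjusts its propagation effort in response to what it observes.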

I’d like to see more survey papers being published. It occurs to me that every thesis will include the makings of such a paper in a “related work” section or chapter. I would encourage students not just to publish papers based on the novel results in their thesis, but to publish a survey article based on their thesis work as well. The Constraints journal even has a “Surveys Editor”, currently Pascal Van Hentenryck.

I have added a link to Kostas’ paper in the “Surveys/Overviews/Bibliographies” section of my Resources website. If you publish a new survey, or have published one that is not already linked to at the website, let me know so I can link to it.

Tic Tac Toe

For a junior high school science fair I built a machine that played Tic Tac Toe — well actually “Reverse Tic Tac Toe”: the first player to get three in a row loses. It played against a human opponent. It worked. My guess is that the best a human opponent could do was tie.

My shop teacher helped me build a big box that stood on end to house the contraption. On the back were a bunch of wires and switches. On the front were light bulbs arranged in a Tic Tac Toe grid. I moved a switch — a strip of metal — to enter the opponent’s move, and a bulb lit up indicating the machine’s move.

This would have been in the late 50’s. Shannon’s paper on “Programming a Computer for Playing Chess” was only published in Philosophical Magazine in 1950 (though it was first presented at the National IRE Convention a year earlier). The “birth” of AI at Dartmouth was in 1956. So a ‘thinking machine’ at a junior high science fair in the late 50’s makes me pretty proud of my younger self. I’d be more impressed if I thought I came up with the idea from scratch, but I suspect I got the idea from a suggested project in a library book.

I made a crucial bad design decision though. The switches to enter the opponent’s moves were on the back of the box, and I stood behind the box to work the switches when the opponents told me their moves. That naturally gave rise to the assumption/suspicion that I was actually just making the moves for the machine myself. Maybe that’s why I didn’t get first prize. 🙂

So, the moral? Presentation may be as important as implementation. 🙂

Gimmicks

Coming on stage in 1973 to deliver a talk in connection with receiving the IJCAI Computers and Thought Award, Patrick Winston brought with him a big, white cube. The “blocks world” was a popular “mini-world” at the time for studying AI, which Patrick had used in his thesis on machine learning (video), so this cube was a natural “visual aid”. It rose to the level of a “gimmick” when Patrick took a bite out of the big, white cube. It was made of angel food cake.

I remember that talk almost 50 years later. And I wasn’t even there; I heard about it second hand. When I gave an invited talk at an AAAI conference, naturally involving constraint satisfaction, I began by playing into the microphone the Rolling Stones song “(I Can’t Get No) Satisfaction”. This was before smart phones and Spotify, so I went to some trouble to reliably produce that effect, purchasing a handheld digital recorder. If anyone remembers anything from that talk, they remember that song.

I once gave a talk on a problem decomposition technique where the talk was explicitly patterned after Ron Popeil’s iconic pitch for the Veg-O-Matic (“it slices, it dices”). (When you think about it, we can learn a lot about effective communication from late night infomercials; they certainly understand enthusiasm.) Years later I chatted with someone who spontaneously recalled my use of that gimmick. Ian Gent used juggling to illustrate the phase transition in a talk I’ll always remember. (He later reprised his juggling act for a YouTube video.)

People hear talks all the time. At a conference they hear dozens in a short span of time. It is worth thinking about how you can make yours stand out.