Author Archives: Gene Freuder


About Gene Freuder

Emeritus Professor of Computer Science. AAAS, AAAI, ECAI Fellow. Member of the Royal Irish Academy.

Me and ChatGPT

ChatGPT has been all over the news lately. Here is a recent “chat” I had with it:

Me: Generate a variable ordering heuristic for constraint satisfaction problems that has not been tried before.

ChatGPT: One possible variable ordering heuristic for constraint satisfaction problems that has not been tried before is based on the concept of information entropy. Information entropy is a measure of the amount of uncertainty or randomness in a system.

To apply this heuristic, we can first calculate the information entropy of each variable in the constraint satisfaction problem. This can be done by considering the number of possible values that each variable can take and the constraints that are imposed on it. For example, if a variable has a large number of possible values and few constraints, it will have a high level of entropy. On the other hand, if a variable has a small number of possible values and many constraints, it will have a low level of entropy.

Once the entropy of each variable has been calculated, we can order the variables according to their entropy. Specifically, we can assign higher priority to variables with lower entropy, as they will have fewer possibilities and be more constrained. This can help reduce the search space and potentially lead to a more efficient solution.

It is worth noting that this heuristic is just one possible approach and may not always be effective in practice. It is always important to carefully evaluate the performance of any variable ordering heuristic and consider how it may be improved or modified to better fit the specific needs of a particular problem.

Me: Isn’t this just the well known domain over degree approach?

ChatGPT: The domain over degree heuristic is a commonly used variable ordering heuristic for constraint satisfaction problems that aims to reduce the search space by prioritizing variables with fewer possible values (also known as a smaller domain) over those with a larger domain. This is similar to the approach I described, which involves ordering variables according to their entropy, which is also a measure of the number of possible values that a variable can take.

However, there are some key differences between these two approaches. The domain over degree heuristic only considers the number of possible values for each variable and does not take into account the constraints that are imposed on the variable. In contrast, the entropy-based heuristic takes both the number of possible values and the constraints into account when calculating the entropy of each variable.

In addition, the entropy-based heuristic considers the overall level of randomness or uncertainty in the system, while the domain over degree heuristic only looks at the number of possible values for each variable in isolation. This means that the entropy-based heuristic may be able to capture additional information about the structure of the problem and potentially lead to more efficient solutions.

Overall, while the domain over degree heuristic and the entropy-based heuristic are similar in some ways, they differ in their approaches to ordering variables and may have different performance characteristics depending on the specific problem being solved.
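For concreteness, here is a minimal sketch of how the proposed ordering might be rendered next to the familiar dom/deg ordering. The constraint representation, the way value supports are folded into a probability distribution, and all the names are my own illustrative assumptions; this is not code that ChatGPT produced:

```python
import math
from collections import defaultdict

def entropy_order(variables, domains, constraints):
    """Order variables by an estimate of the entropy of their current domains.

    `domains` maps each variable to a list of values; `constraints` maps a pair
    (x, y) to a predicate check(vx, vy). This schema, and the way value supports
    become a probability distribution, are my own reading of the proposal.
    """
    scores = {}
    for x in variables:
        # Estimate, for each value, how much support it has in neighbouring domains.
        support = {}
        for vx in domains[x]:
            s = 1  # start at 1 to avoid zero probabilities
            for (a, b), check in constraints.items():
                if a == x:
                    s += sum(1 for vy in domains[b] if check(vx, vy))
                elif b == x:
                    s += sum(1 for vy in domains[a] if check(vy, vx))
            support[vx] = s
        total = sum(support.values())
        # Shannon entropy of the normalised support distribution.
        scores[x] = -sum((p := s / total) * math.log2(p) for s in support.values())
    # Lower entropy first, as ChatGPT suggests: fewer, more constrained possibilities.
    return sorted(variables, key=scores.get)

def dom_over_deg_order(variables, domains, constraints):
    """The classic dom/deg ordering: domain size divided by degree, smallest first."""
    degree = defaultdict(int)
    for (a, b) in constraints:
        degree[a] += 1
        degree[b] += 1
    return sorted(variables, key=lambda x: len(domains[x]) / max(degree[x], 1))
```

Note that if each current domain is simply treated as a uniform distribution, the entropy collapses to the logarithm of the domain size, which is part of why the proposal smells so much like the classic domain-based orderings in the first place.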

I posted this exchange at the Constraints Google Group and Juan Francisco Diaz posted that he approached the originality question by asking GPT:

Do you know papers talking about the entropy-based heuristic approach?

and received the reply:

Yes, there are several papers that discuss the entropy-based heuristic approach for variable ordering in constraint satisfaction problems. Here are a few examples:

  1. “Entropy-based variable and value ordering heuristics for constraint satisfaction problems” by Richard E. Korf and Andrew J. Fellegi (1993) presents an entropy-based variable ordering heuristic and an entropy-based value ordering heuristic, and evaluates their performance on a range of constraint satisfaction problems.
  2. “A comparison of heuristics for the maximum constraint satisfaction problem” by Peter van Beek and Peter van der Wal (1995) compares the performance of the entropy-based heuristic with other variable ordering heuristics on the maximum constraint satisfaction problem, which is a variant of the constraint satisfaction problem in which the goal is to find an assignment that satisfies the maximum number of constraints.
  3. “Efficient entropy-based variable ordering heuristics for graph coloring problems” by Jia-Huai You and T. K. Satyanarayana (2003) presents an entropy-based variable ordering heuristic for graph coloring problems and compares its performance with other heuristics on a variety of graphs.

These papers may provide more detailed information about the entropy-based heuristic approach and its applications to constraint satisfaction problems.

ChatGPT

Register for ChatGPT and ask it “What is a constraint satisfaction problem?”

Then ask it to “Write a program to solve the N-Queens Problem.”
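The kind of program one can expect back is a plain backtracking search. Here is a minimal sketch of my own along those lines, not ChatGPT’s verbatim output:

```python
def solve_n_queens(n):
    """Return one placement of n non-attacking queens as a list of column
    indices, one per row, or None if no solution exists (n = 2, 3)."""
    def backtrack(cols):
        row = len(cols)
        if row == n:
            return list(cols)
        for col in range(n):
            # A new queen conflicts if it shares a column or a diagonal
            # with any queen already placed.
            if all(col != c and abs(col - c) != row - r for r, c in enumerate(cols)):
                cols.append(col)
                solution = backtrack(cols)
                if solution is not None:
                    return solution
                cols.pop()
        return None

    return backtrack([])

print(solve_n_queens(8))  # e.g. [0, 4, 7, 5, 2, 6, 1, 3]
```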

Then start looking for another line of work. 🙂

No, seriously, instead start thinking about how we can use this kind of technology for “progress towards the Holy Grail“. In particular, how we can progress beyond what this technology can do on its own, by working on “bridging CP and ML“. 

Scrabble

I posted the “Puzzles” blog entry to LinkedIn and Supriyo SB Chatterjee responded by noting that “solving for Scrabble is a CSP too”. (Some time ago Supriyo won the New England Regional Colleges Scrabble Championships.) Indeed it is, an interesting kind of CSP.

When it is your move you need to make a choice that satisfies a number of constraints. You can only use letters from your tray (which you then replace). You have to form a word in the Scrabble Dictionary. You have to place the word to link up properly with what is already on the board. And your move changes the board (and your tray), so you have a “sequence” of problems to solve, where each problem in the sequence is dependent on your solution to the previous one.
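As a toy illustration of the single-move constraints, here is a minimal sketch of a legality check for one candidate play. The function name, the way the board’s contribution is represented, and the toy dictionary are my own simplifications:

```python
from collections import Counter

def is_legal_play(word, tray, board_letters, dictionary):
    """Check one candidate Scrabble play against the per-move constraints.

    `word` is the full word formed, `board_letters` the letters of that word
    already supplied by tiles on the board, `tray` the rack. Blanks, cross-words
    and placement geometry are ignored: a big simplification of the real game.
    """
    if word not in dictionary:                       # must be in the dictionary
        return False
    needed = Counter(word) - Counter(board_letters)  # letters the tray must supply
    if not needed:                                   # must actually place a tile
        return False
    return not (needed - Counter(tray))              # tray must cover what is needed

# Example: tray RETAINS, and the board already supplies the C.
print(is_legal_play("CRETINS", "RETAINS", "C", {"CRETINS", "CISTERN"}))  # True
```

The sequential aspect, of course, is that the tray and the board feeding this check change after every move.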

I thought of calling this a “sequential CSP”. Of course, there have been many variants of CSP, including some with some form of “sequential” nature. Sequentially Dependent Meta-Constraint Satisfaction Problem: An Application to Video Games seems particularly relevant here. I thought of the acronym “SCSP”, but it has already been used at least twice, for Semiring-based CSP and for Structural CSP. So then I thought “SQCSP”, even though that risks confusion with Quantified CSPs. But there is the Semi-Qualitative Constraint Satisfaction Problem (SQCSP).

Of course, there is another aspect of Scrabble: it is a game, with two adversaries. So actually the adversarial constraint satisfaction paradigm seems relevant here. Looking back I see that Ken, James, Paidi and I didn’t actually use the “ACSP” acronym when we wrote up adversarial constraint satisfaction for ECAI 2004. Perhaps just as well: “a CSP”?

So one outcome of this was the thought that it could be useful to have some kind of list, taxonomy or categorization of the many variants of constraint satisfaction problems. Perhaps even a survey. I’ve posted on the Constraints Google Group asking people to provide pointers to CSP variants and hope to assemble a list to make available at the Constraints Resources site.

Back to Scrabble, beyond just finding moves that satisfy the constraints, there are interesting issues of optimization and strategy. For example, do you want to go for a lower value move now to try to set up a high value one later, or to prevent your opponent from using a potentially valuable position? I imagine there are books on Scrabble strategy.

Scrabble certainly has been computerized. In a match held way back in 1998, MAVEN defeated World Champion Joel Sherman and runner-up Matt Graham, playing together. But perhaps Scrabble can still inspire new ways of thinking about constraint satisfaction.

Puzzles

The latest puzzle craze is Wordle. Of course, it is a CSP (constraint satisfaction problem), see here. As was the Sudoku puzzle craze, see here. As are so many puzzles, from crossword puzzles to logic puzzles.
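To make the Wordle case concrete, here is a minimal sketch that treats each guess’s feedback as constraints and filters the remaining candidate words accordingly. The feedback encoding, the tiny word list, and the loose handling of duplicate letters are my own illustrative choices:

```python
def consistent(candidate, guess, feedback):
    """Is `candidate` consistent with the feedback Wordle gave for `guess`?

    feedback[i] is 'g' (green: right letter, right spot), 'y' (yellow: letter
    present, wrong spot) or '-' (grey: letter absent). Duplicate-letter corner
    cases are handled only approximately; a simplification of the real game.
    """
    for i, (g, f) in enumerate(zip(guess, feedback)):
        if f == 'g' and candidate[i] != g:
            return False
        if f == 'y' and (candidate[i] == g or g not in candidate):
            return False
        if f == '-' and g in candidate:
            return False
    return True

def filter_candidates(words, guess, feedback):
    """Propagate one round of feedback over the remaining candidate words."""
    return [w for w in words if consistent(w, guess, feedback)]

words = ["crane", "slate", "stole", "shone", "moose", "pride"]
# Guessing "crane" against a hidden word like "stole": only the final E is green.
print(filter_candidates(words, "crane", "----g"))  # ['stole', 'moose']
```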

If you think this is all just fun and games, check out this paper on Fast and Flexible Protein Design Using Deep Graph Neural Networks. From the summary:

“Protein structure and function is determined by the arrangement of the linear sequence of amino acids in 3D space. We show that a deep graph neural network, ProteinSolver, can precisely design sequences that fold into a predetermined shape by phrasing this challenge as a constraint satisfaction problem (CSP), akin to Sudoku puzzles.” 

Photo: Josh Wardle (in The Guardian)

Congratulations to Barry O’Sullivan

Congratulations to Barry O’Sullivan, who has been elected a Fellow of the Association for the Advancement of Artificial Intelligence!

“Each year the AAAI recognizes a group of individuals who have made significant, sustained contributions to the field of artificial intelligence through the continuation of its Fellows program.

The AAAI Fellows Program, as originally chartered, honors a small percentage of the AAAI membership. Fellows are recognized as having unusual distinction in the profession.”

Barry, from the School of Computer Science & IT, University College Cork, Ireland, will be the first entry in the list of Elected AAAI Fellows noting an Irish affiliation at the time of election.

Queens

Quanta has an article “Mathematician Answers Chess Problem About Attacking Queens“. I thought at first that the headline was a bit strange, as we in Constraint Programming, and others, have been solving the n-Queens problem for many years now: how to place n queen chess pieces on an n×n board such that they don’t attack each other. However, the headline refers not to the problem of finding solutions per se, but to the question of how many solutions there are for any n, for which no precise answer has been known (short of enumerating them all). Michael Simkin of Harvard hasn’t actually found a precise answer either, but he has apparently made good progress on narrowing it down. According to the Quanta article, “Simkin proved that for huge chessboards with a large number of queens, there are approximately (0.143n)^n configurations”.

This also reminded me that there are ways of constructing solutions for the n-Queens problem without search, e.g. Constructions for the Solution of the m Queens Problem, which I conveniently ignore because the problem of searching for a solution has been so useful for both teaching and research in CP. More generally, one can go down quite a rabbit hole with the n-Queens problem, not surprising for a problem that was first published, as the 8-Queens problem, in 1848, and studied by Gauss. There is an n-Queens bibliography online with 336 entries.

Abstracting methods developed for a specific application is, of course, a good “meta-heuristic” for seeking new, more general methods for Constraint Programming. For well-studied problems like n-Queens or graph coloring, I suspect that there is a good deal that is still lurking in the older literature, or appearing in new publications, that could be mined to advance CP.

I Spy With My Little Eye

A golden age of surveys/overviews may be upon us. I’ve just added three new items to the Surveys, Overviews, Bibliographies section of the Constraints Resources site. And they deal with the hot AI topics of learning and explanation.

Explanation in Constraint Satisfaction: A Survey. Sharmi Dev Gupta, Begum Genc and Barry O’Sullivan. Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence.

End-to-End Constrained Optimization Learning: A Survey. James Kotary, Ferdinando Fioretto, Pascal Van Hentenryck and Bryan Wilder. Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence.

An overview of machine learning techniques in constraint solving. Andrei Popescu, Seda Polat-Erdeniz, Alexander Felfernig, Mathias Uta, Müslüm Atas, Viet-Man Le, Klaus Pilsl, Martin Enzelsberger and Thi Ngoc Trang Tran. Journal of Intelligent Information Systems.

Opportunities

Ferdinando Fioretto is advertising a Ph.D. position for what looks like a great research program:

A Ph.D. opening is available for candidates interested in the intersection of deep learning and combinatorial optimization. The position will start in January 2022. Topics of interest include:

– Supervised Learning for speeding up the resolution of constrained optimization problems;
– Reinforcement Learning for dynamic and combinatorial problems;
– Graph Neural Network models for graph-structured constrained optimization problems;
– Theoretical guarantees for ML-enhanced constraint optimizers;
– Physics informed deep learning.

The project will combine fundamental aspects of optimization, constrained reasoning, and learning to develop integrated optimization and learning systems. The ideal candidate will have a strong background in mathematics and optimization theory and a strong interest in machine learning and constraint reasoning. Students who majored in Computer Science, Mathematics, Statistics, or Physics are welcome to apply. An MS degree and/or publications in leading international venues will be an advantage.

TO APPLY: Applications should be submitted at ffiorett@syr.edu and candidates should include their statement of purpose, resume, and transcript (if available).

Similarly for this assistant professor opening at TU Eindhoven:

As a successful applicant, you will work on the interface between Artificial Intelligence and decision-making methods. Examples of research directions include but are not limited to: using machine learning (such as predictive modelling and reinforcement learning) to find better solutions to optimization problems, developing decision support tools that combine data-driven models with knowledge-based models. With an embedding of the position in the school of Industrial Engineering, special attention will be paid to applications of the methods and tools in domains such as logistics, transportation, service industries, high-tech manufacturing, and healthcare.

Great to see these positions that resonate with the Pursuit of the Holy Grail.

Meta

Looking for papers on explanation for constraint satisfaction I ran across Meta-explanation in a Constraint Satisfaction Solver by the pioneering French AI scientist Jacques Pitrat. From the abstract:

Meta-explaining is useful to a human or an artificial researcher in order to learn how to use its knowledge efficiently. For instance, when the system RESEARCHER solves the same problem using two different methods, it can meta-explain why it finds a more elegant solution with one of them.

It might be interesting to compare Pitrat’s work here to the work I did with Chavalit Likitvivatanavong and Rick Wallace in Deriving Explanations and Implications for Constraint Satisfaction Problems.

Pitrat’s paper is related to a very ambitious — and very “meta” — research agenda that he was pursuing. See A Step toward an Artificial Artificial Intelligence Scientist, Artificial Beings: The Conscience of a Conscious Machine, and his blog. Unfortunately, Pitrat passed away recently; the record of a day of tribute can be found here.