Author Archives: Gene Freuder

About Gene Freuder

Emeritus Professor of Computer Science. AAAS, AAAI, EurAI Fellow. Member of the Royal Irish Academy.

Help Yourself by Helping Out

A scientist can be called upon to play many roles. One minute you are designing an experiment for a hypothesis you’re testing, the next you are booking hotels for a conference you’re organizing. For many of these roles you may receive no formal instruction while a student. However, you can learn by experience.

Your professors will be playing these roles, and you can volunteer to help. If one of them is organizing a workshop, perhaps you can help set up the workshop website. If they are preparing a grant proposal, even serving as a proofreader can give you some experience. Be proactive about getting involved. Your involvement may expand, and you may have an opportunity to observe other parts of the process. If your professors are good at playing these roles, you can learn from their successes. If they are bad at it, you can learn from their mistakes. šŸ™‚

You can even initiate opportunities by co-opting your professors as partners. For example, you can ask a professor to lend their name and experience to a proposal for a conference workshop in the area you’re specializing in, where you agree ahead of time to take on the logistical burden.

This kind of experience will help you hit the ground running when you take your first independent academic post, and may even help you to obtain it.

One Two Three . . . Infinity

The great physicist George Gamow wrote a book entitled One Two Three . . . Infinity: Facts and Speculations of Science, which I recall as one of the books I encountered in my high school library that inspired my interest in mathematics. This post is one of what I hope will become a series on “Where do ideas come from?”. (As always, further contributions from readers in the Comments are encouraged.)

“Consistency”, as a form of inference, is arguably what distinguishes the AI approach to constraint satisfaction. “In the beginning” there were three forms of consistency, which were described in terms of the “constraint graph” representation of constraint satisfaction problems: “node consistency”, “arc consistency” and “path consistency”, which involved respectively nodes (vertices), arcs (edges) and paths (sequences of edges leading from one vertex to another). Nodes, arcs, paths: that seemed to cover the possibilities for graph consistency.

However, Montanari demonstrated that path consistency could be reduced to consistency for paths of length 2, involving just 2 edges. Looking at it in terms of the number of vertices involved: node consistency involves just 1, arc consistency 2, path consistency 3. “Node, arc, path” … it isn’t obvious that there is anything more to see here. But “1, 2, 3” … that practically begs the question: Does it stop there? Shouldn’t there be a “4-consistency”, a “5-consistency”, …? And thus the idea for “k-consistency” was born. Putting this together with an algorithm for achieving k-consistency (and thus ultimately a complete set of solutions, purely by inference) resulted in an MIT AI Memo in 1976, which in 1978 became my first major journal publication.
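
To make the definition concrete, here is a minimal brute-force sketch in Python (the toy CSP, variable names, and helper functions are my own invention, not the algorithm from the 1976 memo) of what k-consistency asks: every consistent assignment to any k-1 variables can be extended to any k-th variable.

from itertools import combinations, product

# A toy binary CSP, invented purely for illustration.
domains = {
    "x": [1, 2, 3],
    "y": [1, 2, 3],
    "z": [1, 2, 3],
}
constraints = {
    ("x", "y"): lambda a, b: a != b,
    ("y", "z"): lambda a, b: a < b,
}

def consistent(assignment):
    # True if the partial assignment violates no constraint whose variables it covers.
    for (u, v), pred in constraints.items():
        if u in assignment and v in assignment and not pred(assignment[u], assignment[v]):
            return False
    return True

def is_k_consistent(k):
    # Brute force: every consistent assignment to k-1 variables must be
    # extendable to each remaining variable taken as the k-th.
    variables = list(domains)
    for chosen in combinations(variables, k - 1):
        for values in product(*(domains[v] for v in chosen)):
            partial = dict(zip(chosen, values))
            if not consistent(partial):
                continue  # only consistent partial assignments need extending
            for extra in variables:
                if extra in partial:
                    continue
                if not any(consistent({**partial, extra: val}) for val in domains[extra]):
                    return False
    return True

print(is_k_consistent(2))  # False: y = 3 has no support under y < z, so not arc consistent
print(is_k_consistent(3))  # also False: e.g. x = 1, z = 1 cannot be extended to y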

There are two meta lessons here with regard to the question “where do ideas come from”.

The first is the importance of representation. A classic example is Kaplan and Simon’s investigation of the mutilated chessboard problem. Here, it was the switch from talking about different graph structures — nodes, arcs, and paths — to talking about the number of vertices involved. Another representational shift — from viewing constraints as represented by edges to viewing them as represented by nodes (in an “augmented” constraint graph) — enabled the algorithm I devised to achieve higher order consistency.
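
As a small illustration (a toy construction of my own, not the representation or algorithm from the 1978 paper), here is the same shift in miniature: a binary CSP viewed first as the usual constraint graph, with variables as nodes and constraints as edges, and then with the constraints themselves promoted to nodes.

# Three binary constraints over variables x, y, z; the constraint predicates
# are omitted because only the graph structure matters here.
constraints = [("x", "y"), ("y", "z"), ("x", "z")]

# Usual constraint graph: variables are nodes, each constraint is an edge.
variable_nodes = {v for scope in constraints for v in scope}
constraint_edges = set(constraints)

# Constraint-as-node view: each constraint becomes a node of its own,
# linked to the variables it involves.
constraint_nodes = {f"C({u},{v})" for u, v in constraints}
incidence_edges = {(f"C({u},{v})", w) for u, v in constraints for w in (u, v)}

print(sorted(variable_nodes), sorted(constraint_edges))
print(sorted(constraint_nodes), sorted(incidence_edges))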

The second is the notion of extrapolation, as an instance of the even broader concepts of generalization and completion. Here “1, 2, 3” suggests “k”. It just doesn’t seem right that we would have to stop at 3. šŸ™‚

Of course, we don’t actually go to infinity — unless we consider problems with an infinite number of variables. Hmm, that’s an idea …

Here, There, and Everywhere

I continue to be struck by the variety of applications found in my Google or Semantic Scholar “alerts” for papers citing mine. Of course, this doesn’t necessarily mean that my work played any serious role in theirs, but since the papers of mine being cited almost certainly involve constraint satisfaction, it does illustrate how constraint satisfaction can relate to so many other fields. Some recent examples:

Classification and Acquisition

“Classifier-based constraint acquisition”, by Prestwich, Freuder, O’Sullivan, and Browne, has been published in the Annals of Mathematics and Artificial Intelligence and is available online.

Modeling a combinatorial problem is a hard and error-prone task requiring significant expertise. Constraint acquisition methods attempt to automate this process by learning constraints from examples of solutions and (usually) non-solutions. Active methods query an oracle while passive methods do not. We propose a known but not widely-used application of machine learning to constraint acquisition: training a classifier to discriminate between solutions and non-solutions, then deriving a constraint model from the trained classifier. We discuss a wide range of possible new acquisition methods with useful properties inherited from classifiers. We also show the potential of this approach using a Naive Bayes classifier, obtaining a new passive acquisition algorithm that is considerably faster than existing methods, scalable to large constraint sets, and robust under errors.
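
As a rough illustration of the classifier side of this idea only (the toy target constraint and example code below are my own, not the paper’s acquisition algorithm or benchmarks), one can train a Naive Bayes classifier to separate solutions from non-solutions of a small hidden constraint. The paper’s contribution, deriving an explicit constraint model from the trained classifier, is not shown here.

from itertools import product
from sklearn.naive_bayes import CategoricalNB

# Toy hidden constraint over two variables with domain {0, 1, 2}: x + y <= 2.
# Label every assignment as a solution (1) or non-solution (0).
examples = [list(p) for p in product(range(3), repeat=2)]
labels = [int(x + y <= 2) for x, y in examples]

clf = CategoricalNB()
clf.fit(examples, labels)

# The trained classifier now discriminates solutions from non-solutions;
# an acquisition method would go on to extract candidate constraints from it.
print(clf.predict([[0, 0], [2, 2]]))  # expected: [1 0]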

Doing Good

The Operations Research organizations INFORMS and IFORS have initiatives for using technology for good that CP people might want to contribute to:

INFORMS Pro Bono Analytics (PBA) brings together INFORMS members, other analytics practitioners, and nonprofit professionals and organizations working in underserved communities with the goal of enabling these vital nonprofits to begin making data driven decisions. In short, think “helping nonprofits with data” when you think of PBA. Sometimes our work is as complex as building a system to forecast real-time demand for services, and other times it’s as simple as helping an organization choose between analytical tools or organize and store collected data from past fundraising campaigns. Even if your organization is just getting started with data, we can provide advice and assistance on what data to collect, how to collect it, and where to keep it.

The aim of the [IFORS] Developing Countries On-Line Resources page is to offer the OR worker all publicly-available materials on the topic of Operations Research for Development. It also aims to provide a venue for people who are working in the area to share their completed or in-process work, learn from others, and stimulate comments and discussions on the work.

These could be of particular interest to those who participate in the CPAIOR conferences.

ConstrAInts

With the increasing presence and popularity of AI it is important to get the word out that constraint satisfaction is AI. While a strength of the field is its multi-disciplinary nature, it certainly has a ‘home’ — arguably its ‘primary residence’ šŸ™‚ — in the AI community. In fact, I’ve argued that constraints are ubiquitous in AI, to the point where they might even serve as a kind of “lingua franca” unifying the subfields of AI. In particular, constraint satisfaction has been used for machine learning and vice versa, and constraints resonate with other currently high-profile topics in AI like explanation and human-centric AI.

Members of the constraints community have a very strong presence in the larger AI community. For example, Francesca Rossi, a former president of the ACP, is a past president of IJCAI and future president of AAAI, and Barry O’Sullivan, a former president of the ACP, is a past president of EurAI and a current member of the Executive Council of AAAI.

Microplates, Avenues and Smorgasbords

An entry by Maria Andreina Francisco in a thread at the Constraint Google Group engendered several thoughts, which I decided to collect into a blog post. Italics are excerpts from Maria’s entry.

I’m currently a postdoc at a pharmaceutical bioinformatics group.  

It is always exciting to hear about CP applications to bioinformatics. The last International Workshop on Constraint-Based Methods for Bioinformatics I’m aware of (the 12th, WCB’16) was in 2016. There was one scheduled for 2018, but I’m not sure if it took place, or if there has been one since; does anyone know? Is it time for another? In any case, I’ve added Andreina’s ModRef 2020 contribution on microplate layout design to my nascent page of Sample Applications of CP.

I share Alexander’s experience that building a first proof of concept with minizinc is quick and painless, but getting it to work on larger instances with more complex constraints… well, not so quick and painless. 

It appears that there is a research “avenue” opportunity here, which may be difficult to “enter”, but very rewarding if new entrances can be found. Can we conduct some fundamental research addressing the difficulties of practical implementation, especially with regard to “scaling up”?

For example, I once heard an observation from a practitioner that a perceived drawback of CP for end users was that seemingly small changes to the model could have unpredictably big effects on solution time. Somehow that seems like a problem that could be amenable to some fundamental research.

Some sort of general comment would be that I think it was easier to write models when I knew less about CP.  Nowadays I imagine a model in my head only to realise that all the pieces are not all available in one solver and I need to decide which solver/parts I want to keep and rethink the rest. 

The idea that it could be easier to write models when you know less about CP, because all the pieces you want aren’t available in one solver, is an interesting take on things. Could a ‘smorgasbord’ approach — as opposed to a ‘portfolio’ approach šŸ™‚ — be implemented that allowed one to pick and choose and assemble the pieces one needed, as needed? Or better yet, could one automate this: build a system that analyzed the problem and automatically assembled the pieces to build and operate on an appropriate model? Back to my “Holy Grail” hobbyhorse. šŸ™‚

Erdős-Faber-LovĆ”sz

Mathematicians have recently “proven” the Erdős-Faber-LovĆ”sz coloring conjecture. I put “proven” in quotes because the proof only works for “large enough” hypergraphs, a restriction that I think is rather cavalierly addressed in the linked article. Nevertheless, it would be interesting to consider whether some of the techniques used in the proof could be adapted for general constraint satisfaction problem solving. At least one technique they use seems already to be related to a CP staple: “hardest first (fail first)” ordering; perhaps some readers can identify others. Even more interesting would be if some of the methods used in the proof suggest new CP techniques. And it would be exciting if any existing CP techniques suggest alternative or improved proofs of the conjecture.
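
For readers less familiar with that staple, here is a minimal sketch of the “fail first” idea in a toy backtracking graph-coloring solver (the graph, color count, and function names are my own invention, chosen only to illustrate the heuristic): always branch next on the variable with the fewest remaining consistent values, i.e. the hardest one.

def color(graph, num_colors, assignment=None):
    # graph: dict mapping each vertex to the set of its neighbors.
    assignment = assignment or {}
    unassigned = [v for v in graph if v not in assignment]
    if not unassigned:
        return assignment

    def remaining(v):
        used = {assignment[n] for n in graph[v] if n in assignment}
        return [c for c in range(num_colors) if c not in used]

    # Fail first / hardest first: branch on the vertex with the fewest options left.
    v = min(unassigned, key=lambda u: len(remaining(u)))
    for c in remaining(v):
        result = color(graph, num_colors, {**assignment, v: c})
        if result is not None:
            return result
    return None  # dead end: backtrack

# A 4-cycle with a chord; 3 colors suffice.
graph = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(color(graph, 3))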