CONSTRAINTS

Evaluating the impact of AND/OR search on 0-1 integer linear programming
Marinescu R and Dechter R
AND/OR search spaces accommodate advanced algorithmic schemes for graphical models that can exploit the structure of the model. We extend and evaluate the depth-first and best-first AND/OR search algorithms for solving 0-1 Integer Linear Programs (0-1 ILPs) within this framework. We also incorporate a class of dynamic variable ordering heuristics for use while exploring an AND/OR search tree for 0-1 ILPs. We demonstrate the effectiveness of these search algorithms on a variety of benchmarks, including real-world combinatorial auctions, random uncapacitated warehouse location problems and MAX-SAT instances.
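
To illustrate the structural idea this work builds on, here is a minimal, hypothetical Python sketch (not the authors' algorithm): once the constraint graph of a 0-1 ILP splits into independent components, an AND node lets each component be optimised separately and the optima summed, instead of enumerating the full Cartesian product of assignments. The instance data and helper names below are invented for illustration.

    from itertools import product

    # Hypothetical instance: maximize c.x subject to sum(coef*x) <= rhs per constraint,
    # with x binary. Constraints are (dict {var: coef}, rhs) pairs.
    c = [5, 4, 3, 6]
    constraints = [({0: 2, 1: 3}, 4),   # touches x0, x1 only
                   ({2: 1, 3: 2}, 2)]   # touches x2, x3 only -> independent component

    def components(n, constraints):
        """Union-find over variables that share a constraint (the constraint graph)."""
        parent = list(range(n))
        def find(i):
            while parent[i] != i:
                i = parent[i]
            return i
        for coefs, _ in constraints:
            idx = list(coefs)
            for j in idx[1:]:
                parent[find(j)] = find(idx[0])
        groups = {}
        for v in range(n):
            groups.setdefault(find(v), []).append(v)
        return list(groups.values())

    def best_for_component(group, c, constraints):
        """Brute-force the best 0/1 assignment over one independent component."""
        relevant = [(coefs, rhs) for coefs, rhs in constraints
                    if any(v in coefs for v in group)]
        best = float("-inf")
        for bits in product((0, 1), repeat=len(group)):
            x = dict(zip(group, bits))
            if all(sum(a * x[v] for v, a in coefs.items()) <= rhs
                   for coefs, rhs in relevant):
                best = max(best, sum(c[v] * x[v] for v in group))
        return best

    total = sum(best_for_component(g, c, constraints)
                for g in components(len(c), constraints))
    print("optimum of the decomposed instance:", total)   # 5 + 6 = 11

The actual AND/OR algorithms detect such decompositions dynamically along a pseudo tree and combine them with bounding, which this toy enumeration omits.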
"Almost-stable" matchings in the Hospitals / Residents problem with Couples
Manlove DF, McBride I and Trimble J
The Hospitals / Residents problem with Couples (hrc) models the allocation of intending junior doctors to hospitals where couples are allowed to submit joint preference lists over pairs of (typically geographically close) hospitals. It is known that a stable matching need not exist, so we consider min bp hrc, the problem of finding a matching that admits the minimum number of blocking pairs (i.e., is "as stable as possible"). We show that this problem is NP-hard and difficult to approximate even in the highly restricted case that each couple finds only one hospital pair acceptable. However, if we further assume that the preference list of each single resident and each hospital has length at most 2, we give a polynomial-time algorithm for this case. We then present the first Integer Programming (IP) and Constraint Programming (CP) models for min bp hrc. Finally, we discuss an empirical evaluation of these models applied to randomly-generated instances of min bp hrc. We find that, on average, the CP model is about 1.15 times faster than the IP model, and when presolving is applied to the CP model, it is on average 8.14 times faster. We further observe that the number of blocking pairs admitted by a solution is very small, i.e., usually at most 1 and never more than 2, for the 28,000 instances considered.
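
As a concrete illustration of the quantity being minimised, the sketch below counts blocking pairs in a couple-free Hospitals / Residents instance. It is hypothetical toy code, not the paper's IP or CP model, and it ignores the joint preference lists of couples that make min bp hrc hard; the instance data is invented.

    def blocking_pairs(res_pref, hosp_pref, capacity, matching):
        """Return (resident, hospital) pairs that would both rather deviate."""
        assigned = {h: [r for r, hh in matching.items() if hh == h] for h in hosp_pref}
        rank = {h: {r: i for i, r in enumerate(prefs)} for h, prefs in hosp_pref.items()}
        blocking = []
        for r, prefs in res_pref.items():
            current = matching.get(r)
            for h in prefs:
                if h == current:
                    break                      # r cannot improve beyond its assigned hospital
                if r not in rank[h]:
                    continue                   # h finds r unacceptable
                under_capacity = len(assigned[h]) < capacity[h]
                prefers_r = any(rank[h][r] < rank[h].get(r2, float("inf"))
                                for r2 in assigned[h])
                if under_capacity or prefers_r:
                    blocking.append((r, h))
        return blocking

    # Hypothetical instance and matching: r2 is unmatched and h1 prefers r2 to r1.
    res_pref = {"r1": ["h1", "h2"], "r2": ["h1"]}
    hosp_pref = {"h1": ["r2", "r1"], "h2": ["r1"]}
    capacity = {"h1": 1, "h2": 1}
    matching = {"r1": "h1"}
    print(blocking_pairs(res_pref, hosp_pref, capacity, matching))   # [('r2', 'h1')]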
A collection of Constraint Programming models for the three-dimensional stable matching problem with cyclic preferences
Cseh Á, Escamocher G, Genç B and Quesada L
We introduce five constraint models for the 3-dimensional stable matching problem with cyclic preferences and study their relative performance under diverse configurations. While several constraint models have been proposed for variants of the two-dimensional stable matching problem, we are the first to present constraint models for a higher number of dimensions. We show for all five models how to capture two different stability notions, namely weak and strong stability. Additionally, we translate some well-known fairness notions (i.e. sex-equal, minimum regret, egalitarian) into 3-dimensional matchings, and present how to capture them in each model. Our tests cover dozens of problem sizes and four different instance generation methods. We explore two levels of commitment in our models: one where we have an individual variable for each agent (individual commitment), and another one where the determination of a variable involves pairing the three agents at once (group commitment). Our experiments show that the suitability of the commitment depends on the type of stability we are dealing with, and that the choice of the search heuristic can help improve performance. Our experiments not only shed light on the role that learning and restarts can play in solving this kind of problem, but also allowed us to discover that in some cases combining strong and weak stability leads to reduced runtimes for the latter.
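
For readers unfamiliar with the problem, the following hypothetical Python checker (not one of the paper's five constraint models) spells out the weak stability condition: with agents in A ranking B, B ranking C, and C ranking A, a matching into triples is weakly stable if no triple (a, b, c) exists in which all three agents strictly prefer it to their current triple. The preference data is invented.

    from itertools import product

    def is_weakly_stable(pref_A, pref_B, pref_C, matching):
        """matching: list of (a, b, c) triples covering every agent exactly once."""
        partner_of_a = {a: b for a, b, _ in matching}   # A's partner is in B
        partner_of_b = {b: c for _, b, c in matching}   # B's partner is in C
        partner_of_c = {c: a for a, _, c in matching}   # C's partner is in A
        rank = lambda prefs, x: prefs.index(x)
        for a, b, c in product(pref_A, pref_B, pref_C):
            if (rank(pref_A[a], b) < rank(pref_A[a], partner_of_a[a]) and
                rank(pref_B[b], c) < rank(pref_B[b], partner_of_b[b]) and
                rank(pref_C[c], a) < rank(pref_C[c], partner_of_c[c])):
                return False                             # (a, b, c) is a blocking triple
        return True

    # Hypothetical instance: everyone receives their first choice, so no triple blocks.
    pref_A = {"a1": ["b1", "b2"], "a2": ["b2", "b1"]}
    pref_B = {"b1": ["c1", "c2"], "b2": ["c2", "c1"]}
    pref_C = {"c1": ["a1", "a2"], "c2": ["a2", "a1"]}
    matching = [("a1", "b1", "c1"), ("a2", "b2", "c2")]
    print(is_weakly_stable(pref_A, pref_B, pref_C, matching))   # True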
Fast and parallel decomposition of constraint satisfaction problems
Gottlob G, Okulmus C and Pichler R
Constraint Satisfaction Problems (CSP) are notoriously hard. Consequently, powerful decomposition methods have been developed to overcome this complexity. However, this poses the challenge of actually computing such a decomposition for a given CSP instance, and previous algorithms have shown their limitations in doing so. In this paper, we present a number of key algorithmic improvements and parallelisation techniques to compute so-called Generalized Hypertree Decompositions (GHDs) faster. We thus advance the ability to compute optimal (i.e., minimal-width) GHDs for a significantly wider range of CSP instances on modern machines. This lays the foundation for more systems and applications in evaluating CSPs and related problems (such as Conjunctive Query answering) based on their structural properties.
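
As background for what "width" means here, this small hypothetical sketch (unrelated to the paper's parallel algorithm) checks the covering condition of a candidate GHD and reports its width: every bag must be contained in the union of its chosen hyperedges, and the width is the largest number of hyperedges any bag uses. The connectedness condition on the decomposition tree is omitted, and the hypergraph and decomposition are invented.

    # Hypothetical hypergraph: edge name -> set of vertices.
    hypergraph = {
        "e1": {"x", "y"},
        "e2": {"y", "z"},
        "e3": {"z", "w"},
    }

    # Hypothetical decomposition: each node carries a bag and its edge cover (lambda).
    decomposition = [
        {"bag": {"x", "y", "z"}, "cover": ["e1", "e2"]},
        {"bag": {"z", "w"},      "cover": ["e3"]},
    ]

    def ghd_width(hypergraph, decomposition):
        width = 0
        for node in decomposition:
            covered = set().union(*(hypergraph[e] for e in node["cover"]))
            if not node["bag"] <= covered:
                raise ValueError(f"bag {node['bag']} not covered by {node['cover']}")
            width = max(width, len(node["cover"]))
        return width

    print("width of the candidate GHD:", ghd_width(hypergraph, decomposition))   # 2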
Computing relaxations for the three-dimensional stable matching problem with cyclic preferences
Cseh Á, Escamocher G and Quesada L
Constraint programming has proven to be a successful framework for determining whether a given instance of the three-dimensional stable matching problem with cyclic preferences (3dsm-cyc) admits a solution. If such an instance is satisfiable, constraint models can even compute its optimal solution for several different objective functions. On the other hand, the only existing output for unsatisfiable 3dsm-cyc instances is a simple declaration of impossibility. In this paper, we explore four ways to adapt constraint models designed for 3dsm-cyc to the maximum relaxation version of the problem, that is, the computation of the smallest part of an instance whose modification leads to satisfiability. We also extend our models to support the presence of costs on elements in the instance, and to return the relaxation with the lowest total cost for each of the four types of relaxation. Empirical results reveal that our relaxation models are efficient, as in most cases they show little overhead compared to the satisfaction version.
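
The sketch below conveys the relaxation idea in its simplest possible form: search, by increasing total cost, for the cheapest subset of removable elements whose deletion makes the instance satisfiable. The satisfiability test is a black-box callback here (in the paper it is a constraint model for 3dsm-cyc); the element names, costs, and the brute-force enumeration are all hypothetical and would not scale like the authors' models.

    from itertools import combinations

    def cheapest_relaxation(elements, cost, is_satisfiable):
        """elements: removable parts of the instance; cost: element -> cost."""
        candidates = sorted(
            (subset for k in range(len(elements) + 1)
             for subset in combinations(elements, k)),
            key=lambda subset: sum(cost[e] for e in subset))
        for subset in candidates:
            if is_satisfiable(set(subset)):        # instance with `subset` removed
                return subset, sum(cost[e] for e in subset)
        return None

    # Toy usage: pretend the instance becomes satisfiable once "a2" is dropped.
    elements = ["a1", "a2", "a3"]
    cost = {"a1": 2, "a2": 1, "a3": 2}
    print(cheapest_relaxation(elements, cost, lambda removed: "a2" in removed))
    # (('a2',), 1)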
Perception-based constraint solving for sudoku images
Mulamba M, Mandi J, Mahmutoğulları Aİ and Guns T
We consider the problem of perception-based constraint solving, where part of the problem specification is provided through an image supplied by a user. As a pedagogical example, we use the complete image of a Sudoku grid. While the rules of the puzzle are assumed to be known, the image must be interpreted by a neural network to extract the values in the grid. In this paper, we investigate (1) combining machine learning and constraint solving for joint inference, knowing that blank cells need to be both predicted as being blank and filled in to obtain a full solution; (2) the effect of calibration on joint inference; and (3) how to deal with cases where the constraints of the reasoning system are not satisfied. More specifically, in the case of handwritten digits in the image, a naive approach fails to obtain a feasible solution even if the interpretation is correct. Our framework detects human mistakes by using a constraint solver and helps the user to correct these mistakes. We evaluate the performance of the proposed techniques on images taken through the Sudoku Assistant Android app, among other datasets. Our experiments show that (1) joint inference can correct classifier mistakes, (2) overall calibration improves the solution quality on all datasets, and (3) estimating and discriminating between user-written and original visual input while reasoning makes for a more robust system, even in the presence of user errors.
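
To make the joint-inference point concrete, here is a small hypothetical example (not the paper's neural network or solver): the per-cell argmax of a classifier violates an all-different constraint, while picking the jointly most probable assignment that satisfies the constraint recovers a consistent row. The probabilities are invented classifier outputs.

    from itertools import permutations
    from math import prod

    # probs[cell][value] = P(cell contains value), for values 1..4 (hypothetical)
    probs = [
        {1: 0.70, 2: 0.10, 3: 0.10, 4: 0.10},
        {1: 0.60, 2: 0.30, 3: 0.05, 4: 0.05},   # argmax also says 1 -> clashes with cell 0
        {1: 0.05, 2: 0.05, 3: 0.80, 4: 0.10},
        {1: 0.05, 2: 0.05, 3: 0.10, 4: 0.80},
    ]

    argmax = [max(p, key=p.get) for p in probs]
    print("independent argmax:", argmax)        # violates the all-different constraint

    best = max(permutations([1, 2, 3, 4]),      # all-different holds by construction
               key=lambda assignment: prod(p[v] for p, v in zip(probs, assignment)))
    print("joint inference:   ", list(best))    # corrects cell 1 to value 2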
Learning and fine-tuning a generic value-selection heuristic inside a constraint programming solver
Marty T, Boisvert L, François T, Tessier P, Gautier L, Rousseau LM and Cappart Q
Constraint programming is known for being an efficient approach to solving combinatorial problems. Important design choices in a solver are the branching heuristics, designed to lead the search to the best solutions in a minimum amount of time. However, developing these heuristics is a time-consuming process that requires problem-specific expertise. This observation has motivated many efforts to use machine learning to automatically learn efficient heuristics without expert intervention. Although several generic variable-selection heuristics are available in the literature, the options for value-selection heuristics are more scarce. We propose to tackle this issue by introducing a generic learning procedure that can be used to obtain a value-selection heuristic inside a constraint programming solver. This has been achieved thanks to the combination of a reinforcement learning algorithm, a tailored reward signal, and a graph neural network architecture. Experiments on , , , and problems show that this framework competes with the well-known impact-based and activity-based search heuristics and can find solutions close to optimality without requiring a large number of backtracks. Additionally, we observe that fine-tuning a model with a different problem class can accelerate the learning process.
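
The sketch below shows, in hypothetical form, where a value-selection heuristic sits inside a backtracking solver: the search asks a scoring function in which order to try the values of the selected variable. The scoring function here is a trivial stand-in for a trained model, the variable selection is deliberately naive, and the tiny graph-colouring instance is illustrative only; none of it reflects the authors' solver integration.

    def value_order(var, domain, state, score):
        """Try values with the highest learned score first (the heuristic's job)."""
        return sorted(domain, key=lambda v: score(var, v, state), reverse=True)

    def backtrack(domains, constraints, state, score):
        if all(v in state for v in domains):
            return dict(state)
        var = next(v for v in domains if v not in state)   # variable selection kept simple
        for value in value_order(var, domains[var], state, score):
            state[var] = value
            if all(c(state) for c in constraints):
                result = backtrack(domains, constraints, state, score)
                if result is not None:
                    return result
            del state[var]
        return None

    # Toy 2-colouring of a path a - b - c; constraints only check assigned endpoints.
    edges = [("a", "b"), ("b", "c")]
    domains = {v: [0, 1] for v in "abc"}
    constraints = [lambda s, e=e: e[0] not in s or e[1] not in s or s[e[0]] != s[e[1]]
                   for e in edges]
    naive_score = lambda var, val, state: -val              # hypothetical stand-in for a model
    print(backtrack(domains, constraints, {}, naive_score))  # {'a': 0, 'b': 1, 'c': 0}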