Abstracts



Tobias Achterberg
Chip Design Verification with Constraint Integer Programming
Formal verification of chip designs is becoming an increasingly important tool in the semiconductor industry. The International Technology Roadmap for Semiconductors 2005 states: "Today, the verification of modern computing systems has grown to dominate the cost of electronic system design ... In fact, in many application domains, the verification portion has become the predominant component of a project development, in terms of time, cost and human resources dedicated to it."

One important task of design verification is the so-called "property checking" for which one has to check whether a given design fulfills a desired property. Current state-of-the-art property checking algorithms operate on the "gate level" description. At this level, the design and the property can be converted into an instance of the satisfiability problem which is then solved by a black box SAT solver.

SAT solvers proved to be very successful for the control part of a design, but they typically fail on data paths with large arithmetic blocks. This motivated the development of word level solvers using integer programming or constraint programming which can be applied at the "register transfer level" where the structure of the arithmetic operations is still visible.

In this talk, we show how IP, CP, and SAT techniques can be integrated into a unified framework which we call "constraint integer programming". A solver for the property checking problem is implemented using our CIP framework "SCIP". The resulting algorithm features LP relaxations and LP-guided branching from IP, sophisticated domain propagation algorithms as in CP, and a generalization of conflict analysis and conflict-driven branching, which are techniques developed in the SAT community. We show on test instances provided by Infineon and OneSpin Solutions that our integrated approach can be superior to SAT on arithmetic circuits.


Fahiem Bacchus
Solving #SAT
(joint work with Toniann Pitassi)
#SAT is the problem of counting the number of solutions to a CNF theory. The important practical problem of Bayesian Inference is a slight generalization of #SAT to weighted model counting. Traditional approaches to this problem employ a dynamic programming technique called variable elimination, and are able to achieve tree-width bounds when supplied with a good variable ordering. In this talk I will demonstrate how the problem can be solved using backtracking search. If caching is employed (in essence adding dynamic programming to backtracking) similar computational bounds can be achieved. If dynamic decomposition is employed in addition to caching, backtracking can be shown to perform a 1-1 emulation of the operations of variable elimination, except in a different order. However, there are a number of advantages to using backtracking, one of which is that different branches can employ different variable orderings. It can be proved that this can yield a super-polynomial speedup over variable elimination (even when variable elimination is supplied with the best possible ordering).
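
For illustration, here is a minimal Python sketch of model counting by backtracking with caching, in the spirit of the technique described above. This is not the authors' solver: the clause representation, the naive branching rule (first variable of the first clause), and the cache key are our own choices, and no dynamic decomposition is performed.

```python
def count_models(clauses, num_vars):
    """Count satisfying assignments of a CNF formula over variables 1..num_vars.
    Clauses are tuples of nonzero ints; -v denotes the negation of v."""
    def simplify(cls, lit):
        out = []
        for c in cls:
            if lit in c:
                continue                      # clause satisfied by lit
            reduced = tuple(l for l in c if l != -lit)
            if not reduced:
                return None                   # empty clause: conflict
            out.append(reduced)
        return out

    cache = {}
    def count(cls, free):
        if cls is None:
            return 0                          # conflict on this branch
        if not cls:
            return 2 ** free                  # remaining variables are unconstrained
        key = (frozenset(cls), free)
        if key not in cache:
            v = abs(cls[0][0])                # branch on a variable of the first clause
            cache[key] = (count(simplify(cls, v), free - 1)
                          + count(simplify(cls, -v), free - 1))
        return cache[key]

    return count([tuple(c) for c in clauses], num_vars)
```

The cache makes subtrees with an identical residual formula count only once, which is where the dynamic-programming flavour enters.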


Alexander Bockmayr
Hybrid CP/IP Methods
Integer programming (IP) and constraint programming (CP) are two complementary approaches for modeling and solving combinatorial optimization problems. While some of the underlying concepts are similar, there are also a number of important differences. Both CP and IP generate a branching tree. Both use inference methods to prune this tree taking advantage of problem structure: cutting planes in the case of IP, and domain filtering in the case of CP. A major difference, however, is that CP associates each constraint with an algorithm that operates on the solution space so as to remove infeasible solutions. This allows CP to exploit substructure in the problem in a way that IP cannot, while IP benefits from strong continuous relaxations that are unavailable in CP. In the first part of our talk, we compare and contrast the basic concepts underlying CP and IP in order to clarify their relationship. In the second part, we present some ideas on how CP may be integrated with MIP in order to combine their complementary strengths.

References:

Bockmayr, A. and Hooker, J. N.: Constraint programming. In Handbooks in Operations Research and Management Science. 12: Discrete Optimization (Eds. K. Aardal, G. Nemhauser, and R. Weismantel), Chapter 10, 559 - 600, Elsevier, 2005.

Bockmayr, A. and Pisaruk, N.: Detecting infeasibility and generating cuts for mixed integer programming using constraint programming. Computers & Operations Research, 33/10, 2777-2786, 2006.


Endre Boros
Probabilistic branching rules in quadratic unconstrained binary optimization
(Joint work with Peter L. Hammer, Gabriel Tavares, and Alex Waldron.)
Standard polynomial time local search procedures focus on greedily improving the objective, but they cannot even guarantee arriving at a local optimum. We propose a probabilistic branching rule, derived from local optimality conditions, and demonstrate computationally that such rules improve substantially on the performance of one-pass (polynomial time) local search. We propose an extension of this approach by considering a family of equivalent representations of the pseudo-Boolean objective function, such that only global optima satisfy all local optimality conditions.


John Chinneck
Active-Constraint Variable Ordering for Faster Feasibility of Mixed Integer Linear Programs
The selection of the branching variable can greatly affect the speed of branch and bound when solving a mixed-integer or integer linear program. Traditional approaches to branching variable selection rely on estimating the effect of the candidate variables on the objective function. However, the optimum point at the current LP-relaxation solution is determined by the active constraints. Hence our approach instead tries to choose the candidate variable that has the most impact on the active constraints in the current LP relaxation. We apply this method to the problem of finding the first feasible solution as quickly as possible. Experiments demonstrate a significant improvement compared to a state-of-the-art commercial MIP solver.


Vašek Chvátal
Resolution Search
An unfortunate branching choice at the root of a branch-and-bound tree can double your work: once you have branched, there is no turning back and you may be doomed to painfully replicate all your subsequent moves. In this sense, branching is like marrying in the Roman Catholic church: Branch in haste, repent at leisure.

To some, the ominous prospect of irrevocable matrimony may supply the motivation for utmost care in choosing a spouse (an example is the prototype of strong branching, first introduced in Concorde, a computer code for the symmetric traveling salesman problem: use first quick and coarse criteria to eliminate some of a large number of initial candidates in a first round, then slower and finer criteria to eliminate some of the remaining candidates in the next round, and so on in an iterative fashion); others may prefer to choose spouses carelessly and to make divorce easy.

Resolution search is one implementation of the latter plan; following its first appearance, N.I. Yanev (Mathematical Reviews 98c:90077) wrote "my expectations are that soon 'resolution search' will become a frequently used keyword"; sadly, these expectations have come to naught.

In reviewing resolution search, I will go into its origins, details, and extensions beyond the original presentation in Discrete Appl. Math. 73 (1997), 81–99. In particular, I will point out that resolution search can bridge the gap between certain local search heuristics and so-called exact algorithms.


Joe Culberson
A Dog in the Manger Approach
According to Aesop's fable, a dog lay down in a manger. When the cattle came to eat the hay, he barked and snarled to keep them from eating the feed, even though he could not eat it either. On examining my work on search, heuristics and branching, I find that I have made a similar contribution. I spend much of my time looking for classes of instances that neither I nor anyone else is able to solve. In this talk I will present some of the latest work with Yong Gao on designing structures to embed in randomized CSPs to make them hard to solve. In particular we find that embedding appropriate structures in binary constraints makes the instances consistent at various levels, and that this is enough to give them exponential resolution complexity. Note that enforcing consistency is one of the tools used to make it easier to solve CSPs. This talk will focus on the design of the structures we use, and on the way the structures may interact with the constraint graph. We also present a series of experimental results comparing the effect of the design on the difficulty of the instances. Many of these experiments explore structural effects away from the phase transition.


Michel Gendreau
Accelerating Benders Decomposition by Local Branching
(Joint work with Walter Rei, Jean-François Cordeau, and Patrick Soriano)
We examine how local branching can be used to accelerate the classical Benders decomposition algorithm. By applying local branching throughout the solution process, one can simultaneously improve both the lower and upper bounds. We also show how Benders feasibility cuts can be strengthened or replaced with local branching constraints. To assess the performance of the different algorithmic ideas, computational experiments were performed on a set of network design problems and results illustrate the benefits of this approach.
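
As a concrete illustration of the local branching constraints mentioned above, the following Python sketch builds the standard Hamming-ball inequality around an incumbent 0-1 solution x̄: the constraint Δ(x, x̄) ≤ k linearizes as Σ_{j: x̄_j=1}(1 − x_j) + Σ_{j: x̄_j=0} x_j ≤ k. The function names are ours; this is not the authors' code.

```python
def local_branching_cut(x_bar, k):
    """Return (coeffs, rhs) encoding the local branching constraint
    Delta(x, x_bar) <= k as the linear inequality sum_j coeffs[j]*x[j] <= rhs.
    x_bar is an incumbent 0-1 vector; k is the neighbourhood radius."""
    coeffs = [-1 if v == 1 else 1 for v in x_bar]
    rhs = k - sum(x_bar)                      # k minus the number of ones in x_bar
    return coeffs, rhs

def satisfies(x, coeffs, rhs):
    """Check whether a 0-1 vector x lies inside the Hamming ball."""
    return sum(c * xi for c, xi in zip(coeffs, x)) <= rhs
```

Adding this inequality (and its reversed form on the other branch) to the master problem restricts the search to solutions within Hamming distance k of the incumbent.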


Carla Gomes
Coin Flipping for Satisfaction and Beyond


Pierre Hansen
Recent results obtained with Primal-Dual VNS algorithms
(Joint work with Jack Brimberg, Dragan Urošević, and Nenad Mladenović)
The variable neighborhood search metaheuristic is applied to the primal simple plant location problem and to a reduced dual obtained by exploiting the complementary slackness conditions. This leads to (i) heuristic resolution of (metric) instances with uniform fixed costs, up to n = 15,000 users and m = n potential locations for facilities, with an error not exceeding 0.04%; (ii) exact solution of such instances with up to m = n = 7,000; (iii) exact solution of instances with variable fixed costs and up to m = n = 15,000. Recent results on the p-median problem, for which instances of size up to m = n = 20,000 were solved exactly, will also be reported.


Alain Hertz
On the use of heuristic methods for exact answers to feasibility problems
(Joint work with Christian Desrosiers and Philippe Galinier.)
A solution of a constraint satisfaction problem (CSP) is an assignment of a value to each variable from its domain such that all constraints are satisfied. A CSP with no solution is said to be infeasible. A subset S of constraints is called an irreducible infeasible set (IIS) of constraints if no complete assignment satisfies all constraints in S simultaneously, while the CSP induced by S-c has a solution for every c in S. Similarly, an irreducible infeasible set of variables is a subset of variables which is infeasible, but becomes feasible when any one variable is removed. We first show how heuristic methods can help find IISs of constraints and variables. We then illustrate these algorithms on the k-colouring problem, which is to determine whether the vertices of a given graph can be coloured with at most k colours such that any two adjacent vertices have different colours. If such a colouring exists, the graph is said to be k-colourable. The chromatic number of a graph G is the smallest k such that G is k-colourable. The detection of IISs can help in finding the chromatic number of a graph. Indeed, suppose that no heuristic algorithm is able to determine a k-colouring of a given graph G. One may then suspect that G is not k-colourable. To prove it, it is sufficient to exhibit a partial subgraph G' (obtained by removing edges) or an induced subgraph G'' (obtained by removing vertices and all edges incident to these vertices) which is not k-colourable but which becomes k-colourable as soon as any edge of G' or any vertex of G'' is removed. The edges of G' correspond to an IIS of constraints while the vertices of G'' form an IIS of variables. If G' and G'' have fewer edges and vertices than G, then instead of proving that G is not k-colourable, it is hopefully easier to prove that G' or G'' is not k-colourable.
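
For illustration, here is a minimal Python sketch of finding an IIS of constraints by a deletion filter: repeatedly try to drop a constraint and keep the drop whenever the remainder stays infeasible. This is not the authors' heuristic method (which avoids exhaustive feasibility checks); the brute-force SAT oracle below is our own stand-in for a generic CSP feasibility test.

```python
from itertools import product

def sat(clauses):
    """Brute-force feasibility oracle for small CNF formulas (illustration only).
    Clauses are tuples of nonzero ints; -v denotes the negation of variable v."""
    vars_ = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(vars_)):
        val = dict(zip(vars_, bits))
        if all(any(val[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def deletion_filter(constraints, is_feasible):
    """Shrink an infeasible constraint set to an irreducible infeasible set (IIS):
    try to drop each constraint; keep the drop if the rest stays infeasible."""
    assert not is_feasible(constraints)
    iis = list(constraints)
    i = 0
    while i < len(iis):
        trial = iis[:i] + iis[i + 1:]
        if not is_feasible(trial):
            iis = trial                       # constraint i is redundant for infeasibility
        else:
            i += 1                            # constraint i belongs to the IIS
    return iis
```

Every constraint that survives the filter is necessary for infeasibility, which is exactly the irreducibility condition defined above.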


Brigitte Jaumard
Efficient Search for Golomb Rulers
A Golomb ruler with M marks is defined by a set of M integers such that all pairwise differences are distinct; in an optimal Golomb ruler, the largest mark is in addition as small as possible. A second order Golomb ruler is a Golomb ruler such that, as far as possible, all pairwise differences of differences are also distinct. Construction of Golomb rulers and second order Golomb rulers is a highly combinatorial problem which has applications in, e.g., the design of convolutional self-doubly orthogonal codes (CSO²C). We will discuss various branch-and-bound algorithms that we developed for the efficient search of either Golomb rulers or second order Golomb rulers.
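
To make the definition concrete, the following Python sketch checks the Golomb property and finds an optimal (shortest) ruler for very small M by a naive depth-first branch and bound. It is only a toy illustration of the search space, not the authors' algorithms.

```python
from itertools import combinations

def is_golomb(marks):
    """A set of marks is a Golomb ruler iff all pairwise differences are distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

def optimal_golomb(m):
    """Shortest Golomb ruler with m marks, by depth-first branch and bound:
    extend partial rulers mark by mark, pruning any repeated difference."""
    # marks 0, 1, 3, 7, ..., 2^(m-1) - 1 always form a ruler, giving an upper bound
    best = {'marks': None, 'len': 2 ** (m - 1)}

    def extend(marks, diffs):
        if len(marks) == m:
            best['marks'], best['len'] = list(marks), marks[-1]
            return
        for nxt in range(marks[-1] + 1, best['len']):
            if nxt >= best['len']:            # bound may have tightened meanwhile
                break
            new = {nxt - a for a in marks}
            if not (new & diffs):             # all differences still distinct
                extend(marks + [nxt], diffs | new)

    extend([0], set())
    return best['marks']
```

Each completed ruler tightens the length bound, so the last ruler recorded is optimal.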


Chu Min Li
Branching and reasoning for satisfiability
Branching and reasoning are among the most effective techniques for solving a SAT problem. The goal of branching rules is to make the search tree as small as possible, and the goal of reasoning is to simplify the current SAT instance as much as possible. On one hand, Liberatore has shown that the problem of selecting the best branching variable to obtain a minimal search tree is likely harder than SAT itself, since the problem is both NP-hard and coNP-hard. On the other hand, numerous inference rules exist for reasoning in a SAT problem to simplify it, but their cost may be larger than the gain obtained from the simplification. Combining branching and reasoning seems to be a promising direction. In this talk, we present some successful combinations of branching and reasoning for SAT and MaxSAT, and pose some questions about the combination.


Jeff Linderoth
Strong(er) Branching for Mixed Integer Programming
Strong branching is a method for selecting a branching entity in mixed integer linear programming. It works by explicitly computing the impact of many different branching choices, and then chooses the best of these on which to base the dichotomy. The talk will begin by describing the concept of strong branching. We next will discuss improvements to the original idea, and we will mention extensions of strong branching to realms outside of mixed integer linear programming. We will conclude by considering a "lookahead" branching method that asks the question: by taking into account the impact of the current branching decision on the bounds of the child nodes two levels deeper than the current node, can better branching decisions be made?
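
The mechanics of strong branching can be sketched in a few lines of Python on a toy problem: the LP relaxation of a 0-1 knapsack, which the greedy ratio rule solves exactly (and which leaves at most one fractional variable, so the candidate list here is small; in a general MIP there would be many candidates). The scoring by product of child bound degradations is one common rule, not the only one; all names below are our own.

```python
def knapsack_lp(values, weights, cap, fixed):
    """LP relaxation of a 0-1 knapsack with some variables fixed (index -> 0/1).
    Greedy by value/weight ratio solves this relaxation exactly (weights > 0).
    Returns (objective, solution), or None if the fixing is infeasible."""
    n = len(values)
    x = [0.0] * n
    obj = 0.0
    for j, v in fixed.items():
        if v == 1:
            cap -= weights[j]
            obj += values[j]
            x[j] = 1.0
    if cap < 0:
        return None
    order = sorted((j for j in range(n) if j not in fixed),
                   key=lambda j: values[j] / weights[j], reverse=True)
    for j in order:
        take = min(1.0, cap / weights[j])
        x[j] = take
        cap -= take * weights[j]
        obj += take * values[j]
        if cap <= 0:
            break
    return obj, x

def strong_branch(values, weights, cap):
    """Score each fractional variable by the product of the LP bound
    degradations of its two children; return the highest-scoring one."""
    root_obj, x = knapsack_lp(values, weights, cap, {})
    best, best_score = None, -1.0
    for j, xj in enumerate(x):
        if not 0.0 < xj < 1.0:
            continue                          # only fractional candidates
        score = 1.0
        for v in (0, 1):
            child = knapsack_lp(values, weights, cap, {j: v})
            drop = root_obj - child[0] if child else float('inf')
            score *= max(drop, 1e-6)          # guard against zero degradation
        if score > best_score:
            best, best_score = j, score
    return best
```

Solving both children for every candidate is exactly what makes full strong branching expensive, and what the improvements discussed in the talk aim to reduce.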


Andrea Lodi
About branching on general disjunctions
(joint work with Livio Bertacco and Matteo Fischetti)
We survey and extend some recent results on branching on constraints in the context of a branch-and-cut algorithm for mixed-integer linear programming. In particular, we focus on branching on general disjunctions, which can be obtained in various ways, e.g., by exploiting Gomory mixed-integer cuts or by solving some NP-hard mixed-integer linear programs.


Inês Lynce
Efficient Search for Haplotype Inference
(Joint work with João Marques-Silva, Ana Sofia Graça and Arlindo Oliveira)
Mutation in DNA is the principal cause for differences among human beings, and Single Nucleotide Polymorphisms (SNPs) are the most common mutations. Hence, a fundamental task is to complete a map of haplotypes (which identify SNPs) in the human population. Associated with this effort, a key computational problem is the inference of haplotype data from genotype data, since in practice genotype data rather than haplotype data is usually obtained.

Different haplotype inference approaches have been proposed, two of which are the most promising in terms of accuracy: the utilization of statistical methods and the utilization of the pure parsimony criterion.

The haplotype inference by pure parsimony (HIPP) problem consists in inferring haplotypes from genotypes such that the number of required haplotypes is minimum. Previous approaches to the HIPP problem have focused on integer programming models and branch-and-bound algorithms. In contrast, we propose a Boolean Satisfiability (SAT) based approach. The proposed solution entails a SAT model, a number of key pruning techniques, and an iterative algorithm that enumerates the possible solution values for the target optimization problem.

Experimental results, obtained on a wide range of instances, demonstrate that the SAT-based approach can be several orders of magnitude faster than existing solutions for the HIPP problem. Besides being more efficient, the SAT-based approach is also the only one capable of computing the solution for a large number of instances. In addition, the SAT-based approach is competitive, in terms of accuracy and efficiency, with the most effective statistical methods.


Karem Sakallah
From Propositional Satisfiability to Satisfiability Modulo Theories
Satisfiability Modulo Theories, or SMT, is the problem of deciding the feasibility of a Boolean combination of predicates from one or more background "theories," such as the theory of uninterpreted functions, the theory of (integer) linear arithmetic, the theory of arrays, etc. SMT formulas are quite expressive and have numerous applications such as hardware and software verification, scheduling, planning, etc. This talk traces the history of SMT from the original works of Nelson-Oppen and Shostak, which were motivated by program verification, to the present which is witnessing remarkable advances in the performance and scalability of SMT solvers leading to their use in a wider range of applications. The current crop of SMT solvers owes its speed and capacity to the powerful Boolean reasoning capabilities of modern backtrack propositional satisfiability (SAT) solvers. Such solvers augment the classical Davis-Putnam-Logemann-Loveland (DPLL) search algorithm with efficient conflict analysis, clause learning, non-chronological backtracking (aka backjumping), as well as "two-watched-literals" unit propagation, adaptive branching, and random restarts. These innovations have allowed modern SAT solvers to routinely tackle propositional formulas containing tens of thousands of variables and millions of clauses. The architecture of most recent SMT solvers involves a tight integration of specialized theory solvers within a DPLL framework. Dubbed DPLL(T), this architecture gives the responsibility of Boolean reasoning to the DPLL-based SAT solver which, in turn, interacts with a solver for theory T through a well-defined interface. The theory solver need only worry about checking the feasibility of conjunctions of theory predicates passed on to it from the SAT solver as it explores the Boolean search space of the formula.
For this integration to work well, however, the theory solver must be able to participate in propagation and conflict analysis, i.e., it must be able to infer new facts from already established facts, as well as to supply succinct explanations of infeasibility when theory conflicts arise. In other words, the theory solver must be incremental and backtrackable. The winner of this year's SMT competition (held in August at the Computer-Aided Verification conference) has all of these features, including a clever incremental Simplex algorithm for the theory of linear arithmetic that integrates quite well within the DPLL framework.


Tuomas Sandholm
Information-theoretic approaches to branching in search
(joint work with Andrew Gilpin)
Deciding what to branch on at each node is a key element of search algorithms. We present four families of methods for selecting what question to branch on. They are all information-theoretically motivated to reduce uncertainty in remaining subproblems. In the first family, a good variable to branch on is selected based on lookahead. In real-world procurement optimization, this entropic branching method outperforms default CPLEX and strong branching. The second family combines this idea with strong branching. The third family does not use lookahead, but instead exploits features of the underlying structure of the problem. Experiments show that this family significantly outperforms the state-of-the-art branching strategy when the problem includes indicator variables as the key driver of complexity. The fourth family is about branching using carefully constructed linear inequality constraints over sets of variables.


Tuomas Sandholm
Nogood Learning for Mixed Integer Programming
(joint work with Robert Shields)
Nogood learning has proven to be an effective CSP technique critical to success in today's top SAT solvers. We extend the technique for use in combinatorial optimization problems, as opposed to mere constraint satisfaction. In particular, we examine 0-1 integer programming (0-1 IP). Our technique generates globally valid cutting planes for the 0-1 IP search algorithm from information learned through constraint propagation. Nogoods (cutting planes) are generated not only from infeasibility but also from bounding. All of our techniques are geared toward yielding tighter LP upper bounds, and thus smaller search trees. Experiments suggest that nogood learning does not help in optimization because few cutting planes are generated, and they are weak. We explain why, and identify problem characteristics that affect the effectiveness. We then generalize the technique to mixed-integer programming. Finally, we lay out directions along which the techniques can potentially be made helpful.
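
To make the cutting planes concrete: a nogood recorded for an infeasible partial 0-1 assignment {x_j = v_j, j in S} can be written as the globally valid inequality Σ_{j: v_j=0} x_j + Σ_{j: v_j=1} (1 − x_j) ≥ 1, which forbids every completion of that assignment. The Python sketch below (our own illustration, not the authors' code) builds this cut in linear form.

```python
def nogood_cut(assignment):
    """Given an infeasible partial 0-1 assignment {index: value}, return
    (coeffs, rhs) for the globally valid cut sum_j coeffs[j]*x[j] >= rhs,
    which excludes every completion of the assignment."""
    coeffs = {j: (1 if v == 0 else -1) for j, v in assignment.items()}
    rhs = 1 - sum(assignment.values())        # 1 minus the number of ones fixed
    return coeffs, rhs

def violated(x, coeffs, rhs):
    """True iff the 0-1 vector x is cut off by the nogood."""
    return sum(c * x[j] for j, c in coeffs.items()) < rhs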


Laurent Simon
A Local Search Algorithm for UNSAT
(joint work with G. Audemard (Lens, France))
Local search algorithms are still the best methods for a large number of fundamental SAT problems, despite the tremendous progress made by complete search algorithms over the last few years. However, their intrinsic limit does not allow them to address UNSAT problems. Ten years ago, this question challenged the community without any answer: was it possible to use local search algorithms for UNSAT formulae? We propose here a first approach addressing this issue that can beat resolution-based complete methods.


Klaus Truemper
Areas of Intelligent Systems: Reasoning, Discretization, and Learning
Several key areas of intelligent systems are surveyed: reasoning at several levels of the polynomial hierarchy, handling discretization, learning empirical models from data, and learning reasoning from feedback. For each area, mathematical results as well as typical applications are discussed.

The talk is largely based on the book "Design of Logic-based Intelligent Systems", Wiley, 2004.


Stefan Voss
Advances in the Use of Optimization Software Class Libraries
Just as metaheuristics provide flexible and extensible methods of solving optimization problems, class libraries provide flexible and extensible building blocks for creating software to implement metaheuristics. In this talk we provide a brief survey of software class libraries and frameworks in the field of metaheuristics. More specifically, we review the goals, challenges and prospects concerning software reuse in the field of metaheuristics and provide pointers to examples from the literature.

Nowadays, advanced heuristic approaches for discrete optimization are usually still implemented from scratch. We discuss some crucial problems that have so far prevented the field from having a "heuristics stockroom" with ready-to-use software components available. We describe different approaches that aim to support the application of metaheuristics (such as, e.g., tabu search, simulated annealing, the pilot method) to various combinatorial optimization problems, including real-world problems. Successful examples of applying optimization software class libraries range from classical combinatorial optimization problems up to parallel approaches as well as real-world online optimization for logistics problems.

Successful, as yet unpublished, applications and results will be shown regarding the use of a heuristic optimization framework we have developed over the years. As recent interest relates to hybridization, we investigate simple and yet effective hybridizations of several metaheuristics, e.g., reactive tabu search and simulated annealing. [Reactive tabu search aims at the automatic adaptation of the tabu list length. The idea is to increase the tabu list length when the tabu memory indicates that the search is revisiting formerly traversed solutions. Once too many repetitions are encountered, an escape mechanism constituting a random walk is triggered; this mechanism is an essential part of the method. We propose to replace this random walk by a controlled simulated annealing. Excellent results are presented for various combinatorial optimization problems.] Moreover, we consider ideas for lookahead.

In the final part of the presentation we consider the combination of optimization software class libraries with other "means of optimization", notably discrete event simulation as well as mixed integer programming solvers.


Toby Walsh
Estimating Search Tree Size
I discuss methods that can be used online to estimate search tree size. These methods have been applied to estimate search tree size both in decision problems (like propositional satisfiability) and in optimization problems (like the traveling salesperson problem). I also discuss how such methods can be used to build better branching heuristics.


Weixiong Zhang
Phase Transitions and Backbones in Combinatorial Optimization Problems and Heuristic Search Methods
Phase transition refers to phenomena in which some global property of a system (or a combinatorial problem) changes rapidly and dramatically when a control parameter crosses a critical value. A simple example of a phase transition is water changing from solid (ice) to liquid (water) and then to gas (steam) as the temperature increases. The backbone of a problem is the fraction of variables that have fixed values among all (optimal) solutions, i.e., they are the most critically constrained variables of the problem. The concepts of phase transitions and backbones have been used to characterize typical-case properties of combinatorial problems and algorithms in recent years. In the first part of the talk, I will discuss phase transitions and backbones of several combinatorial optimization problems (e.g., the TSP, maximum Boolean satisfiability and number partitioning) and phase transitions of the complexity of searching for optimal solutions in a tree model of branch-and-bound. In the second part of my presentation, I will describe some novel techniques that exploit phase transitions and backbones for developing heuristic search algorithms for combinatorial optimization problems.
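
For illustration, the backbone notion is easy to state in code: a variable belongs to the backbone of a satisfiable CNF formula if it takes the same value in every model. The Python sketch below (our own, brute-force, for tiny instances only) returns the backbone variables and their forced values; the backbone fraction is then the size of that set divided by the number of variables.

```python
from itertools import product

def backbone(clauses, num_vars):
    """Backbone of a CNF formula: variables taking the same value in every model.
    Clauses are tuples of nonzero ints; -v denotes the negation of variable v.
    Brute force over all 2^num_vars assignments, for illustration only."""
    models = []
    for bits in product([0, 1], repeat=num_vars):
        if all(any((bits[abs(l) - 1] == 1) == (l > 0) for l in c)
               for c in clauses):
            models.append(bits)
    if not models:
        return None                           # unsatisfiable: backbone undefined
    return {v + 1: models[0][v] for v in range(num_vars)
            if all(m[v] == models[0][v] for m in models)}
```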



