Lec07 Constraint Satisfaction
Artificial Intelligence
• Each domain Di consists of a set of allowable values, {v1, . . . , vk} for variable Xi.
• Each constraint Ci consists of a pair ⟨scope, rel⟩, where scope is a tuple of variables that participate in the constraint and rel is a relation that defines the values that those variables can take on.
• A relation can be represented as an explicit list of all tuples of values that satisfy the constraint, or as an
abstract relation that supports two operations.
• If X1 and X2 both have the domain {A,B}, then the constraint saying the two variables must have different
values can be written as <(X1,X2), [(A,B), (B,A)]> or as <(X1,X2),X1≠X2>
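As a small sketch of these two representations (the Python structure here is illustrative, not a fixed API), the X1 ≠ X2 constraint can be stored either as an explicit list of allowed tuples or as a predicate that tests a combination on demand:

```python
# A constraint as a <scope, rel> pair.
# Explicit representation: enumerate every allowed tuple of values.
explicit = (("X1", "X2"), [("A", "B"), ("B", "A")])

# Abstract representation: the relation is a predicate that answers
# "is this combination allowed?" without enumerating tuples.
abstract = (("X1", "X2"), lambda x1, x2: x1 != x2)

# Both accept (A, B) and reject (A, A).
assert ("A", "B") in explicit[1] and ("A", "A") not in explicit[1]
assert abstract[1]("A", "B") and not abstract[1]("A", "A")
```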
Constraint Satisfaction Problems
• In CSP:
• A state is defined by variables Xi with values from domain Di
• A goal test is a set of constraints specifying allowable combinations of values for subsets of
variables
• CSP allows useful general-purpose algorithms with more power than standard search algorithms
Constraint Satisfaction Problems
• Each state in a CSP is defined by an assignment of values to some or all of the variables,
{Xi=vi,Xj= vj , . . .}.
• An assignment that does not violate any constraints is called a consistent (or legal) assignment.
• A complete assignment is an assignment in which every variable is assigned.
• A solution to a CSP is a consistent, complete assignment.
• A partial assignment is one that assigns values to only some of the variables.
• In order to solve a CSP, a consistent, complete assignment must be found (typically by search).
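A minimal sketch of these definitions in code (assuming each constraint is stored as a (scope, predicate) pair, as in the abstract representation above; the helper names are illustrative):

```python
def is_consistent(assignment, constraints):
    """True if no constraint whose scope is fully assigned is violated."""
    for scope, rel in constraints:
        if all(var in assignment for var in scope):
            if not rel(*(assignment[var] for var in scope)):
                return False
    return True

def is_solution(assignment, variables, constraints):
    """A solution is a complete and consistent assignment."""
    return len(assignment) == len(variables) and is_consistent(assignment, constraints)
```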
Example: Map-Coloring
• We are given the task of coloring each region of Australia either red, green, or blue in such a way that
no neighboring regions have the same color.
• To formulate this as a CSP:
Variables: Each region is a variable: X = {WA, NT, Q, NSW, V, SA, T}.
Domains: The domain of each variable is the set Di = {red, green, blue}.
Constraints: Neighboring regions must have distinct colors.
• Since there are nine places where regions border, there are nine constraints:
C = {SA ≠ WA, SA ≠ NT, SA ≠ Q, SA ≠ NSW, SA ≠ V,
WA ≠ NT, NT ≠ Q, Q ≠ NSW, NSW ≠ V}.
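This formulation can be written down directly; a sketch (the data layout is illustrative and reuses the (scope, predicate) convention from above):

```python
variables = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
domains = {v: {"red", "green", "blue"} for v in variables}

# One binary inequality constraint per bordering pair of regions.
borders = [("SA", "WA"), ("SA", "NT"), ("SA", "Q"), ("SA", "NSW"), ("SA", "V"),
           ("WA", "NT"), ("NT", "Q"), ("Q", "NSW"), ("NSW", "V")]
constraints = [((x, y), lambda a, b: a != b) for x, y in borders]
```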
Example: Map-Coloring
• The simplest kind of CSP involves variables that have discrete, finite domains.
• Map-coloring problems, scheduling with time limits, and the 8-queens problem are finite-domain CSPs.
• A discrete domain can be infinite: the set of integers or the set of strings
• With infinite domains, constraints cannot be enumerated by all allowed combinations of values.
• With infinite domains, a constraint language must be used to describe constraints such as T1+d1≤T2
directly, without enumerating the set of pairs of allowable values for (T1, T2).
• Linear constraints on integer variables are solvable.
• No algorithm exists for solving general nonlinear constraints on integer variables.
• Continuous-domain CSPs with linear constraints are solvable in polynomial time by linear
programming methods.
Variations on the CSP formalism
types of constraints
• In a Cryptarithmetic puzzle, each letter stands for a distinct digit; the aim is to find a substitution of
digits for letters such that the resulting sum is arithmetically correct.
• The global constraint Alldiff(F, T, U, W, R, O) states that all of these letters must take distinct values.
• The constraints on the four columns of the puzzle can be written as n-ary constraints:
O + O = R + 10 · C10
C10 + W + W = U + 10 · C100
C100 + T + T = O + 10 · C1000
C1000 = F
where C10, C100, and C1000 are auxiliary variables representing the digit carried over into the tens, hundreds, or thousands column.
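A brute-force sketch that solves the puzzle directly (this assumes the puzzle is TWO + TWO = FOUR, which matches the letters above, and that leading digits must be nonzero):

```python
from itertools import permutations

def solve_two_plus_two():
    """Yield digit assignments with all letters distinct (Alldiff) such that
    TWO + TWO = FOUR."""
    letters = "FTUWRO"
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a["T"] == 0 or a["F"] == 0:          # leading digits must be nonzero
            continue
        two = 100 * a["T"] + 10 * a["W"] + a["O"]
        four = 1000 * a["F"] + 100 * a["O"] + 10 * a["U"] + a["R"]
        if two + two == four:
            yield a

print(next(solve_two_plus_two()))               # one consistent assignment
```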
Real-World CSPs
• Assignment problems
• e.g., who teaches what class
• Timetabling problems
• e.g., which class is offered when and where?
• Hardware configuration
• Transportation scheduling
• Factory scheduling
Naïve Formulation:
• This search formulation is the same for all CSPs!
• Every solution appears at depth n (with n variables), so depth-first search can be used.
• There are d^n complete assignments for a CSP with n variables of domain size d.
• Branching factor: nd at the top level and (n − h)d at depth h, hence n!·d^n leaves!
• This naïve formulation is not practical.
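For a concrete sense of scale: the Australia map has n = 7 variables and d = 3 colors, so this naïve tree has about 7! · 3^7 ≈ 1.1 × 10^7 leaves, even though there are only 3^7 = 2187 distinct complete assignments.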
Commutativity
• A problem is commutative if the order of application of any given set of actions has no effect on the
outcome.
• CSPs are commutative because when assigning values to variables, we reach the same partial
assignment regardless of order.
• We need only consider a single variable at each node in the search tree.
• For example, at the root node of a search tree for coloring the map of Australia, we might make a choice
between SA=red, SA=green, and SA=blue, but we would never choose between SA=red and WA=blue.
• With this restriction (the commutative property of CSPs), the number of leaves is d^n.
• All CSP search algorithms consider a single variable assignment at a time, thus there are d^n leaves.
• The backtracking search for CSPs is a depth-first search that chooses values for one variable at a
time and backtracks when a variable has no legal values left to assign.
• Backtracking search is the basic uninformed algorithm for CSPs.
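A minimal backtracking sketch (static variable order, no inference; it reuses the is_consistent helper and the map-coloring data sketched earlier, and is only meant to show the shape of the algorithm):

```python
def backtracking_search(variables, domains, constraints, assignment=None):
    """Depth-first search that assigns one variable per level and backtracks
    when the current variable has no legal value left."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                                      # complete and consistent
    var = next(v for v in variables if v not in assignment)    # static ordering
    for value in domains[var]:
        assignment[var] = value
        if is_consistent(assignment, constraints):
            result = backtracking_search(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]                                    # undo and try next value
    return None                                                # trigger backtracking

# Example: backtracking_search(variables, domains, constraints) on the
# Australia map-coloring data returns one legal coloring.
```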
Backtracking Search
Backtracking Search: Example
Improving Backtracking Efficiency
• The simplest strategy for variable selection is to choose the next unassigned variable in order,
{X1,X2, . . .}.
• This static variable ordering may NOT produce an efficient search.
Degree Heuristic:
• choose the variable with the most constraints on remaining variables
• The minimum-remaining-values (MRV) heuristic is usually a more powerful guide, but the degree heuristic can be useful as a tie-breaker.
Least-Constraining-Value
a heuristic for value selection
• Once a variable has been selected, the algorithm must decide on the order in which to examine its
values.
• Given a variable, least constraining value heuristic prefers the value that rules out the fewest
choices for the neighboring variables in the constraint graph.
• Minimum Remaining Values (MRV) heuristic picks a variable that is most likely to cause a failure
soon, thereby pruning the search tree.
• MRV also has been called the most constrained variable or fail-first heuristic.
• If some variable X has no legal values left, the MRV heuristic will select X and failure will be detected
immediately—avoiding pointless searches through other variables.
• Least-Constraining-Value heuristic picks a value that is most likely to cause a failure last.
• We only need one solution; therefore it makes sense to look for the most likely values first.
• If we wanted to enumerate all solutions rather than just find one, then value ordering would be irrelevant.
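A sketch of how these ordering heuristics might be coded (the helper names are illustrative; neighbors is assumed to map each variable to its adjacent variables, and the counting assumes binary inequality constraints as in map coloring):

```python
def select_unassigned_variable(variables, domains, assignment, neighbors):
    """MRV: choose the variable with the fewest remaining legal values;
    break ties with the degree heuristic (most constraints on unassigned vars)."""
    unassigned = [v for v in variables if v not in assignment]
    return min(unassigned,
               key=lambda v: (len(domains[v]),
                              -sum(1 for u in neighbors[v] if u not in assignment)))

def order_domain_values(var, domains, assignment, neighbors):
    """LCV: try first the value that rules out the fewest choices for
    the unassigned neighboring variables."""
    def ruled_out(value):
        return sum(1 for u in neighbors[var]
                   if u not in assignment and value in domains[u])
    return sorted(domains[var], key=ruled_out)
```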
Interleaving Search and Inference
• Some inference algorithms can reduce the domains of variables before the search begins.
• Inference algorithms can also infer new domain reductions for the neighboring variables every time a value is chosen for a variable.
Inference: Forward Checking
• Whenever a variable X is assigned, the forward-checking process establishes arc consistency for it:
• For each unassigned variable Y that is connected to X by a constraint, delete from Y's domain any value that is inconsistent with the value chosen for X.
• Because forward checking only does arc consistency inferences, there is no reason to do forward checking
if we have already done arc consistency as a preprocessing step.
• For many problems the search will be more effective if we combine the MRV heuristic with forward
checking.
• Forward Checking Idea: Keep track of remaining legal values for unassigned variables and
terminate search when any variable has no legal values.
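A sketch of this idea (assuming a neighbors mapping and a symmetric binary constraint predicate; the function returns pruned copies of the domains, or None as the early-termination signal):

```python
import copy

def forward_check(var, value, domains, assignment, neighbors, constraint):
    """After assigning var = value, delete from each unassigned neighbor's
    domain every value inconsistent with that choice; return the reduced
    domains, or None if some domain becomes empty (early failure)."""
    new_domains = copy.deepcopy(domains)
    new_domains[var] = {value}
    for u in neighbors[var]:
        if u not in assignment:
            new_domains[u] = {w for w in new_domains[u] if constraint(value, w)}
            if not new_domains[u]:
                return None
    return new_domains
```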
Forward Checking: Example
Domains of the map-coloring variables: initial domains, then after WA=red, after Q=green, and after V=blue, with the legal values of the unassigned variables shrinking at each step.
Constraint Propagation
• Forward checking propagates information from assigned to unassigned variables, but it does not provide early detection for all failures: for example, after WA=red and Q=green, both NT and SA are forced to be blue, yet NT and SA are adjacent, so no consistent completion exists and forward checking does not notice it.
• X → Y is consistent iff
for every value x of X there is some allowed y for Y
• A network is arc-consistent if every variable is arc consistent with every other variable.
Arc Consistency
X → Y is consistent iff
for every value x of X there is some allowed y for Y
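A sketch of the standard AC-3 procedure for enforcing arc consistency over binary constraints (constraint is an assumed symmetric predicate over a pair of values, neighbors maps each variable to its adjacent variables, and domain values are stored in sets):

```python
from collections import deque

def revise(domains, x, y, constraint):
    """Delete values of x that have no supporting value in y's domain."""
    revised = False
    for vx in set(domains[x]):
        if not any(constraint(vx, vy) for vy in domains[y]):
            domains[x].discard(vx)
            revised = True
    return revised

def ac3(domains, neighbors, constraint):
    """Make every arc X -> Y consistent; return False if some domain empties."""
    queue = deque((x, y) for x in neighbors for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y, constraint):
            if not domains[x]:
                return False                       # inconsistency detected
            for z in neighbors[x]:
                if z != y:
                    queue.append((z, x))           # recheck arcs into x
    return True
```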
Problem Structure
• Suppose the CSP can be broken into independent subproblems CSPi, each with c variables out of the total of n variables, where c is a constant.
• Then there are n/c subproblems, each of which takes at most d^c work to solve, where d is the size of the domain.
• Hence, the total work is O(d^c · n/c), which is linear in n.
• Without the decomposition, the total work is O(d^n), which is exponential in n.
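As a concrete (illustrative) instance of this arithmetic: with n = 80 Boolean variables (d = 2) split into subproblems of c = 20 variables each, the decomposed problem requires at most 4 · 2^20 ≈ 4 × 10^6 assignments to examine, versus 2^80 ≈ 10^24 without the decomposition.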
• A constraint graph is a tree when any two variables are connected by only one path (no loops).
• Theorem: If the constraint graph has no loops, the CSP can be solved in O(n·d^2) time (linear in n).
• Compare to general CSPs, where the worst-case time is O(d^n).
Algorithm for tree-structured CSPs
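A sketch of a tree-CSP solver (assuming the tree is given as a topological ordering of the variables plus a parent mapping, and that revise and the symmetric binary constraint predicate are as in the AC-3 sketch above):

```python
def tree_csp_solver(order, parent, domains, constraint):
    """Solve a tree-structured CSP in O(n · d^2):
    1) make each arc parent(X) -> X consistent, working from the leaves
       toward the root, then
    2) assign values greedily from the root downward."""
    root = order[0]
    for x in reversed(order[1:]):                    # leaves toward root
        if revise(domains, parent[x], x, constraint) and not domains[parent[x]]:
            return None                              # no solution exists
    assignment = {}
    for x in order:
        legal = [v for v in domains[x]
                 if x == root or constraint(assignment[parent[x]], v)]
        if not legal:
            return None
        assignment[x] = legal[0]                     # any supported value works
    return assignment
```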
Nearly tree-structured CSPs
• Since we have an efficient algorithm for trees, we can consider whether more general constraint graphs can be reduced to trees somehow.
General Algorithm:
1. Choose a subset S of the CSP’s variables such that the constraint graph becomes a tree after
removal of S. S is called a cycle cutset.
2. For each possible assignment to the variables in S that satisfies all constraints on S,
(a) remove from the domains of the remaining variables any values that are inconsistent with the
assignment for S, and
(b) If the remaining CSP has a solution, return it together with the assignment for S.
With a cutset of size c, the runtime is O(d^c · (n − c) · d^2), which is very fast for small c.
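A sketch of this cutset-conditioning loop (it reuses tree_csp_solver from the sketch above; neighbors and a symmetric binary constraint predicate are assumed):

```python
from itertools import product

def cutset_conditioning(cutset, tree_order, tree_parent, domains, neighbors, constraint):
    """Enumerate assignments to the cycle cutset S; for each consistent one,
    prune the remaining variables' domains and solve the residual tree CSP."""
    for values in product(*(domains[v] for v in cutset)):
        s = dict(zip(cutset, values))
        # the cutset assignment itself must satisfy the constraints inside S
        if any(y in s and not constraint(s[x], s[y])
               for x in cutset for y in neighbors[x]):
            continue
        # (a) remove values inconsistent with the assignment for S
        pruned = {x: {v for v in domains[x]
                      if all(constraint(s[y], v) for y in neighbors[x] if y in s)}
                  for x in tree_order}
        # (b) if the remaining tree-structured CSP has a solution, return it
        solution = tree_csp_solver(tree_order, tree_parent, pruned, constraint)
        if solution is not None:
            return {**s, **solution}
    return None
```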
Local Search For CSPs
• A two-step solution using min-conflicts for an 8-queens problem. At each stage, a queen is chosen for
reassignment in its column.
• The number of conflicts (number of attacking queens) is shown in each square.
• The algorithm moves the queen to the min-conflicts square, breaking ties randomly.
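A sketch of min-conflicts for n-queens (one queen per column; conflicts counts attacking queens for a given square, and max_steps is an assumed iteration cap):

```python
import random

def conflicts(board, col, row):
    """Number of queens in other columns that attack square (col, row)."""
    return sum(1 for c, r in enumerate(board)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts(n=8, max_steps=10_000):
    board = [random.randrange(n) for _ in range(n)]       # random complete assignment
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(board, c, board[c]) > 0]
        if not conflicted:
            return board                                  # no attacks: solution found
        col = random.choice(conflicted)                   # pick a conflicted queen
        # move it to the row with the fewest conflicts, breaking ties randomly
        board[col] = min(range(n),
                         key=lambda r: (conflicts(board, col, r), random.random()))
    return None
```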
Summary