Artificial Intelligence Unit 3 Ppt

Download as pdf or txt
Download as pdf or txt
You are on page 1of 69

ARTIFICIAL

INTELLIGENCE
Prof. U. M Rane
Artificial Intelligence & Data Science
K. K. Wagh Institute of Engineering Education & Research
Nashik
UNIT- 3

Adversarial search
Adversarial Search in AI
• Adversarial search is a kind of search in which we examine the problems that
arise when we try to plan ahead in a world where other agents are planning
against us.

• In previous topics, we studied search strategies that involve only a single
agent aiming to find a solution, often expressed as a sequence of actions.

• However, there are situations where more than one agent is searching for a
solution in the same search space; this situation usually occurs in game
playing.

• An environment with more than one agent is termed a multi-agent
environment, in which each agent is an opponent of the others and plays
against them. Each agent needs to consider the actions of the other agents
and the effect of those actions on its own performance.
Artificial Intelligence
Adversarial Search in AI (Game theory)
• Searches in which two or more players with conflicting goals explore the same
search space for a solution are called adversarial searches, often known as
games.
• Games are modeled as a search problem together with a heuristic evaluation
function; these are the two main factors that help to model and solve games
in AI.

Types of Games in AI:

• Perfect information, deterministic: Chess, Checkers, Go, Othello
• Perfect information, chance moves: Backgammon, Monopoly
• Imperfect information, deterministic: Battleships, blind tic-tac-toe
• Imperfect information, chance moves: Bridge, poker, Scrabble, nuclear war
Artificial Intelligence
Adversarial Search in AI

•Perfect information: A game with perfect information is one in which agents can
see the complete board. Agents have all the information about the game, and
they can also see each other's moves. Examples are Chess, Checkers, Go, etc.
•Imperfect information: If agents do not have all the information about the game
and are not aware of what is going on, such games are called games with
imperfect information, for example Battleship, blind tic-tac-toe, Bridge, etc.
•Deterministic games: Deterministic games are those which follow a strict
pattern and set of rules, and there is no randomness associated with them.
Examples are Chess, Checkers, Go, tic-tac-toe, etc.
•Non-deterministic games: Non-deterministic games are those which have
various unpredictable events and a factor of chance or luck. This factor of
chance or luck is introduced by either dice or cards. These games are random,
and each action's response is not fixed. Such games are also called stochastic
games. Examples: Backgammon, Monopoly, Poker, etc.
Artificial Intelligence
Adversarial Search in AI (Game theory)

Zero-Sum Game

• Zero-sum games are adversarial searches that involve pure competition.

• In a zero-sum game, each agent's gain or loss of utility is exactly balanced by
the losses or gains of utility of the other agent.
• One player of the game tries to maximize a single value, while the other player
tries to minimize it.
• Each move by one player in the game is called a ply.
• Chess and tic-tac-toe are examples of zero-sum games.

Artificial Intelligence
Adversarial Search in AI

Zero-sum game: Embedded thinking

Zero-sum games involve embedded thinking, in which one agent or player is
trying to figure out:

•What to do.
•How to decide on the move.
•That they need to think about the opponent as well.
•That the opponent is also thinking about what to do.

Each player is trying to work out the opponent's response to their actions. This
requires embedded thinking to solve game problems in AI.

Artificial Intelligence
Adversarial Search in AI (Game theory)
Formalization of the problem
A game can be defined as a type of search in AI which can be formalized with the
following elements:
•Initial state: It specifies how the game is set up at the start.
•Player(s): It specifies which player has the move in state s.
•Actions(s): It returns the set of legal moves in state s.
•Result(s, a): It is the transition model, which specifies the result of taking move a
in state s.
•Terminal-Test(s): The terminal test is true if the game is over and false otherwise.
States where the game has ended are called terminal states.
•Utility(s, p): The utility function gives the final numeric value for a game that ends
in terminal state s for player p. It is also called the payoff function. For chess, the
outcomes are a win, loss, or draw, with payoff values +1, 0, or ½. For tic-tac-toe,
utility values of +10, -10, and 0 are commonly used.
Artificial Intelligence
Adversarial Search in AI
Game tree:
A game tree is a tree whose nodes are the
game states and whose edges are the
moves made by the players. A game tree
involves the initial state, the actions
function, and the result function.
Example: Tic-Tac-Toe game tree:
The following figure shows part of the
game tree for the tic-tac-toe game.
Following are some key points of the game:
•There are two players, MAX and MIN.
•The players take alternate turns, starting
with MAX.
•MAX maximizes the result of the game
tree.
•MIN minimizes the result.
Artificial Intelligence
Mini-Max Algorithm in Artificial Intelligence
•The mini-max algorithm is a recursive or backtracking algorithm used in decision-
making and game theory. It provides an optimal move for the player, assuming that
the opponent is also playing optimally.
•The mini-max algorithm uses recursion to search through the game tree.
•The min-max algorithm is mostly used for game playing in AI, such as Chess,
Checkers, tic-tac-toe, Go, and various other two-player games. The algorithm
computes the minimax decision for the current state.
•In this algorithm two players play the game; one is called MAX and the other MIN.
•Each player plays so that the opponent gets the minimum benefit while they
themselves get the maximum benefit.
•Both players are opponents of each other: MAX selects the maximized value and
MIN selects the minimized value.
•The minimax algorithm performs a depth-first search to explore the complete
game tree.
•The minimax algorithm proceeds all the way down to the terminal nodes of the
tree, then backs the values up the tree as the recursion unwinds.
Artificial Intelligence
Pseudo-code for MinMax Algorithm

function minimax(node, depth, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node

    if maximizingPlayer then                 // for the Maximizer player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth-1, false)
            maxEva = max(maxEva, eva)        // take the maximum of the values
        return maxEva

    else                                     // for the Minimizer player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth-1, true)
            minEva = min(minEva, eva)        // take the minimum of the values
        return minEva

Artificial Intelligence
MinMax Algorithm
Initial call:
Minimax(node, 3, true)
Working of Min-Max Algorithm:

• The working of the minimax algorithm can be easily described using an
example. Below we have taken an example game tree representing a
two-player game.
• In this example there are two players, one called Maximizer and the other
called Minimizer.
• Maximizer tries to get the maximum possible score, and Minimizer tries to
get the minimum possible score.
• The algorithm applies DFS, so in this game tree we have to go all the way
down to the leaves to reach the terminal nodes.
• At the terminal nodes the terminal values are given, so we compare those
values and back them up the tree until the initial state is reached.
Artificial Intelligence
MinMax Algorithm
Following are the main steps involved in
solving the two-player game tree:

Step 1: In the first step, the algorithm
generates the entire game tree and applies
the utility function to get the utility values
for the terminal states. In the tree diagram
below, let A be the initial state of the tree.
Suppose the maximizer takes the first turn,
which has a worst-case initial value of
-infinity, and the minimizer takes the next
turn, which has a worst-case initial value of
+infinity.

Artificial Intelligence
MinMax Algorithm
Step 2: Now we first find the utility values
for the Maximizer. Its initial value is -∞, so
we compare each terminal-state value with
the Maximizer's initial value and determine
the higher node values. It finds the
maximum among them all.

o For node D: max(-1, -∞) => max(-1, 4) = 4
o For node E: max(2, -∞) => max(2, 6) = 6
o For node F: max(-3, -∞) => max(-3, -5) = -3
o For node G: max(0, -∞) => max(0, 7) = 7
Artificial Intelligence
MinMax Algorithm
Step 3: In the next step it is the Minimizer's
turn, so it compares all node values with +∞
and finds the third-layer node values.

o For node B = min(4, 6) = 4
o For node C = min(-3, 7) = -3

Step 4: Now it is the Maximizer's turn, and it
again chooses the maximum of all node
values and finds the maximum value for the
root node. In this game tree there are only 4
layers, so we reach the root node
immediately, but in real games there will be
more than 4 layers.

For node A: max(4, -3) = 4


Artificial Intelligence
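As a cross-check on the walkthrough above, here is a minimal runnable Python sketch of minimax. The nested-list encoding of the example tree (a node is a list of its children, a leaf is its static evaluation) is an assumption made here for illustration, not something taken from the slides.

import math

def minimax(node, depth, maximizing_player):
    # Runnable version of the pseudocode: a node is a list of children,
    # a leaf is its static evaluation (a plain number).
    if depth == 0 or not isinstance(node, list):      # terminal node
        return node
    if maximizing_player:                             # Maximizer's turn
        max_eva = -math.inf
        for child in node:
            max_eva = max(max_eva, minimax(child, depth - 1, False))
        return max_eva
    else:                                             # Minimizer's turn
        min_eva = math.inf
        for child in node:
            min_eva = min(min_eva, minimax(child, depth - 1, True))
        return min_eva

# Example tree from the walkthrough: A -> B, C; B -> D, E; C -> F, G.
D, E, F, G = [-1, 4], [2, 6], [-3, -5], [0, 7]
A = [[D, E], [F, G]]
print(minimax(A, 3, True))    # 4, matching Step 4: A = max(4, -3) = 4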
MinMax Algorithm
Properties of Mini-Max algorithm:
• Complete – The min-max algorithm is complete. It will definitely find a solution
(if one exists) in a finite search tree.
• Optimal – The min-max algorithm is optimal if both opponents play optimally.
• Time complexity – As it performs DFS on the game tree, the time complexity of
the min-max algorithm is O(b^m), where b is the branching factor of the game
tree and m is the maximum depth of the tree.
• Space complexity – The space complexity of the mini-max algorithm is also
similar to DFS, which is O(bm).

Limitation of the minimax algorithm:

The main drawback of the minimax algorithm is that it gets really slow for complex
games such as Chess, Go, etc. These types of games have a huge branching
factor, and the player has many choices to decide among. This limitation of the
minimax algorithm can be improved by alpha-beta pruning, which is discussed in
the next topic.
Artificial Intelligence
Alpha-Beta Pruning
• Alpha-beta pruning is a modified version of the minimax algorithm. It is an
optimization technique for the minimax algorithm.

• As we have seen, the number of game states the minimax search algorithm has
to examine is exponential in the depth of the tree. We cannot eliminate the
exponent, but we can effectively cut it in half. There is a technique by which we
can compute the correct minimax decision without checking each node of the
game tree, and this technique is called pruning. It involves two threshold
parameters, alpha and beta, for future expansion, so it is called alpha-beta
pruning. It is also called the alpha-beta algorithm.

• Alpha-beta pruning can be applied at any depth of a tree, and sometimes it
prunes not only tree leaves but entire subtrees.

Artificial Intelligence
Alpha-Beta Pruning
The two parameters can be defined as:

• Alpha: The best (highest-value) choice found so far at any point along the
path for the Maximizer. The initial value of alpha is -∞.
• Beta: The best (lowest-value) choice found so far at any point along the
path for the Minimizer. The initial value of beta is +∞.

• Alpha-beta pruning applied to a standard minimax algorithm returns the same
move as the standard algorithm, but it removes all the nodes that do not really
affect the final decision and only make the algorithm slow. By pruning these
nodes, it makes the algorithm faster.

Artificial Intelligence
Alpha-Beta Pruning
Condition for Alpha-beta pruning:
The main condition required for alpha-beta pruning is:
α >= β

Key points about alpha-beta pruning:

•The Max player will only update the value of alpha.
•The Min player will only update the value of beta.
•While backtracking the tree, node values are passed up to the parent nodes,
not the values of alpha and beta.
•Alpha and beta values are only passed down to the child nodes.

Artificial Intelligence
Pseudo-code for Alpha-beta Pruning:
function minimax(node, depth, alpha, beta, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node

    if maximizingPlayer then                 // for the Maximizer player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth-1, alpha, beta, false)
            maxEva = max(maxEva, eva)
            alpha = max(alpha, maxEva)
            if beta <= alpha then
                break                        // beta cutoff: prune remaining children
        return maxEva

    else                                     // for the Minimizer player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth-1, alpha, beta, true)
            minEva = min(minEva, eva)
            beta = min(beta, eva)
            if beta <= alpha then
                break                        // alpha cutoff: prune remaining children
        return minEva
Artificial Intelligence
Working of Alpha-Beta Pruning:
Let's take an example of a two-player search tree to understand the working of alpha-beta pruning.
Step 1: In the first step, the Max player starts with the first move at node A, where α = -∞ and
β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and
β = +∞, and node B passes the same values to its child D.

Artificial Intelligence
Working of Alpha-Beta Pruning:
Step 2: At node D, the value of α will be calculated, as it is Max's turn. The value of α is
compared first with 2 and then with 3, so max(2, 3) = 3 becomes the value of α at node D,
and the node value will also be 3.
Step 3: Now the algorithm backtracks to node B, where the value of β will change, as this
is Min's turn. Now β = +∞ is compared with the available subsequent node value, i.e.
min(∞, 3) = 3; hence at node B now α = -∞ and β = 3.

Artificial Intelligence
Working of Alpha-Beta Pruning:
In the next step, the algorithm traverses the next successor of node B, which is node E,
and the values α = -∞ and β = 3 are passed down as well.
Step 4: At node E, Max takes its turn, and the value of alpha changes. The current value
of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E α = 5 and β = 3, where
α >= β, so the right successor of E is pruned and the algorithm will not traverse it, and
the value at node E becomes 5.

Artificial Intelligence
Working of Alpha-Beta Pruning:
Step 5: In the next step, the algorithm again backtracks the tree, from node B to node A.
At node A the value of alpha is changed; the maximum available value is 3, as
max(-∞, 3) = 3, and β = +∞. These two values are now passed to the right successor of A,
which is node C.
At node C, α = 3 and β = +∞, and the same values are passed on to node F.
Step 6: At node F, the value of α is again compared, first with the left child, which is 0, so
max(3, 0) = 3, and then with the right child, which is 1, so max(3, 1) = 3; α remains 3, but
the node value of F becomes 1.

Artificial Intelligence
Working of Alpha-Beta Pruning:
Step 7: Node F returns the node value 1 to node C; at C, α = 3 and β = +∞. Here the value
of beta is changed: it is compared with 1, so min(∞, 1) = 1. Now at C, α = 3 and β = 1, and
again the condition α >= β is satisfied, so the next child of C, which is G, is pruned, and
the algorithm does not compute the entire subtree under G.

Artificial Intelligence
Working of Alpha-Beta Pruning:
Step 8: C now returns the value 1 to A. Here the best value for A is max(3, 1) = 3.
The final game tree shows the nodes that were computed and the nodes that were never
computed. Hence the optimal value for the maximizer is 3 for this example.

Artificial Intelligence
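The same walkthrough can be reproduced with a runnable Python sketch of the alpha-beta pseudocode above. The leaf values under the pruned branches are not given on the slides, so the placeholders below (9, 7 and 5) are arbitrary; the algorithm never examines them.

import math

def alphabeta(node, depth, alpha, beta, maximizing_player):
    # Runnable version of the alpha-beta pseudocode: a node is a list of
    # children, a leaf is its static evaluation (a plain number).
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing_player:
        max_eva = -math.inf
        for child in node:
            max_eva = max(max_eva, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, max_eva)
            if beta <= alpha:
                break                      # prune the remaining children
        return max_eva
    else:
        min_eva = math.inf
        for child in node:
            min_eva = min(min_eva, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, min_eva)
            if beta <= alpha:
                break                      # prune the remaining children
        return min_eva

# Tree from the walkthrough: D = [2, 3], E = [5, ?], F = [0, 1], G = [?, ?].
D, E, F, G = [2, 3], [5, 9], [0, 1], [7, 5]   # 9, 7, 5 are placeholder leaves
A = [[D, E], [F, G]]
print(alphabeta(A, 3, -math.inf, math.inf, True))   # 3, matching Step 8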
Move Ordering in Alpha-Beta pruning:
The effectiveness of alpha-beta pruning is highly dependent on the order in which each
node is examined. Move order is an important aspect of alpha-beta pruning.
It can be of two types:
•Worst ordering: In some cases, the alpha-beta pruning algorithm does not prune any of the
leaves of the tree and works exactly like the minimax algorithm. It also consumes more time
because of the alpha and beta bookkeeping; such an order of pruning is called worst ordering.
In this case, the best move occurs on the right side of the tree. The time complexity for such
an order is O(b^m).

•Ideal ordering: The ideal ordering for alpha-beta pruning occurs when lots of pruning
happens in the tree and the best moves occur on the left side of the tree. We apply DFS,
so it searches the left side of the tree first and can go twice as deep as the minimax
algorithm in the same amount of time. The complexity with ideal ordering is O(b^(m/2)).
Artificial Intelligence
Rules to find good ordering:

Following are some rules to find good ordering in alpha-beta pruning:

•Try the best move from the shallowest node first.

•Order the nodes in the tree such that the best nodes are checked first.

•Use domain knowledge while finding the best move. Example: for chess, try the order

captures first, then threats, then forward moves, then backward moves.

•We can bookkeep the states, as there is a possibility that states may repeat.

Artificial Intelligence
Constraint Satisfaction Problems (CSP) in Artificial Intelligence

• Finding a solution that meets a set of constraints is the goal of constraint

satisfaction problems (CSPs).

• Finding values for a group of variables that fulfill a set of restrictions or rules is the

aim of constraint satisfaction problems.

• For tasks including resource allocation, planning, scheduling, and decision-making,

CSPs are frequently employed in AI.

Artificial Intelligence
Define/Components of Constraint Satisfaction Problems (CSP)

Variables: Variables in a CSP are the objects that must have values assigned to them in order to
satisfy a particular set of constraints. Boolean, integer, and categorical variables are just a few
examples of the various types of variables; in a Sudoku puzzle, for instance, variables could
stand for the puzzle cells that need to be filled with numbers. In a scheduling problem, variables
might represent time slots or tasks.

Domains: The range of potential values that a variable can take is represented by its domain.
Depending on the problem, a domain may be finite or infinite. For instance, in Sudoku, the set of
numbers from 1 to 9 can serve as the domain of a variable representing a puzzle cell. In
scheduling, the domain of a time slot variable might be a list of available times.
Artificial Intelligence
Components of Constraint Satisfaction Problems (CSP)

Constraints: Constraints are the rules or conditions that specify relationships between variables.
Constraints in a CSP define the ranges of possible values for variables. Unary constraints, binary
constraints, and higher-order constraints are only a few examples of the various sorts of
constraints. For instance, in a sudoku problem, the restrictions might be that each row,
column, and 3×3 box can only have one instance of each number from 1 to 9.

Artificial Intelligence
Constraint Satisfaction Problems (CSP) representation:
•A finite set of variables V1, V2, V3, …, Vn.
•A non-empty domain for every variable: D1, D2, D3, …, Dn.
•A finite set of constraints C1, C2, …, Cm.
 • where each constraint Ci restricts the possible values for variables,
 • e.g., V1 ≠ V2
 • Each constraint Ci is a pair <scope, relation>
 • Example: <(V1, V2), V1 not equal to V2>
 • Scope = set of variables that participate in the constraint.
 • Relation = list of valid combinations of variable values.

Artificial Intelligence
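As an illustration of the representation above, here is a minimal Python sketch using the slide's own V1 ≠ V2 constraint; the domain {1, 2, 3} and the satisfies() helper are assumptions made only for this example.

variables = ["V1", "V2"]                                  # finite set of variables
domains = {"V1": {1, 2, 3}, "V2": {1, 2, 3}}              # non-empty domain per variable

# Each constraint is a pair <scope, relation>; the relation lists the valid
# combinations of values for the variables in the scope.
scope = ("V1", "V2")
relation = [(a, b) for a in domains["V1"] for b in domains["V2"] if a != b]
constraints = [(scope, relation)]                         # here: V1 != V2

def satisfies(assignment, constraints):
    # Check a complete assignment against every <scope, relation> pair.
    return all(tuple(assignment[v] for v in scope) in relation
               for scope, relation in constraints)

print(satisfies({"V1": 1, "V2": 2}, constraints))         # True
print(satisfies({"V1": 2, "V2": 2}, constraints))         # False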
Real-World Examples of CSPs
•Sudoku Puzzles: In Sudoku, the variables are the empty cells, the domains are
numbers from 1 to 9, and the constraints ensure that no number is repeated in a
row, column, or 3x3 subgrid.
•Scheduling Problems: In university course scheduling, variables might
represent classes, domains represent time slots, and constraints ensure that
classes with overlapping students or instructors cannot be scheduled
simultaneously.
•Map Coloring: In the map coloring problem, variables represent regions or
countries, domains represent available colors, and constraints ensure that
adjacent regions must have different colors.
Artificial Intelligence
Representation of CSPs
1. Variables as Placeholders:
Variables in CSPs act as placeholders for problem components that need to be
assigned values. They represent the entities or attributes of the problem under
consideration. For example:
•In a Sudoku puzzle, variables represent the empty cells that need numbers.
•In job scheduling, variables might represent tasks to be scheduled.
•In map coloring, variables correspond to regions or countries that need to be
colored.

Artificial Intelligence
Representation of CSPs
2. Domains:
Each variable in a CSP is associated with a domain, which defines the set of
values that the variable can take. Domains are a critical part of the CSP
representation, as they restrict the possible assignments of values to variables.
For instance:
•In Sudoku, the domain for each empty cell is the numbers from 1 to 9.
•In scheduling, the domain for a task might be the available time slots.
•In map coloring, the domain could be a list of available colors.
Domains ensure that variable assignments remain within the specified range of
values.
Artificial Intelligence
Representation of CSPs
3. Constraints:
Constraints are typically represented in the form of logical expressions,
equations, or functions. For example:
•In Sudoku, constraints ensure that no two numbers are repeated in the same
row, column, or subgrid.
•In scheduling, constraints might involve ensuring that two tasks are not
scheduled at the same time.
•In map coloring, constraints require that adjacent regions have different colors.
Constraint specification is a crucial part of problem modeling, as it defines the
rules that the variables must follow.

Artificial Intelligence
Constraint Propagation
• Constraint propagation is the process of using the constraints to reduce the
domain of possible values for each variable, and to infer new constraints
from the existing ones.
• For example, if you have a variable X that can take values from 1 to 10, and
a constraint that X must be even, then you can reduce the domain of X to 2,
4, 6, 8, and 10. Similarly, if you have a constraint that X + Y = 12, and you
know that X = 4, then you can infer that Y = 8.
• By applying constraint propagation iteratively, you can eliminate
inconsistent values and simplify the problem.

Artificial Intelligence
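A minimal Python sketch of the two propagation steps just described; the assumption that Y also starts with the values 1 to 10 is made only for illustration.

domains = {
    "X": set(range(1, 11)),   # X can take values 1..10
    "Y": set(range(1, 11)),   # assumed domain for Y, for the sake of the example
}

# Unary constraint: X must be even -> prune the odd values from X's domain.
domains["X"] = {v for v in domains["X"] if v % 2 == 0}
print(domains["X"])           # {2, 4, 6, 8, 10}

# Binary constraint: X + Y == 12. Once X is known to be 4, infer Y = 8.
domains["X"] = {4}
domains["Y"] = {y for y in domains["Y"] if any(x + y == 12 for x in domains["X"])}
print(domains["Y"])           # {8}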
Advantages of Constraint Propagation
• One of the main advantages of constraint propagation is that it can reduce
the search space and prune branches that lead to dead ends.
• This can make the problem easier to solve and improve the performance of
your algorithm.
• For example, if you use constraint propagation to assign colors to a map,
you might find that some regions have only one possible color left, and you
can assign it without further exploration.
• Another advantage of constraint propagation is that it can reveal hidden
structures and symmetries in the problem, and help you find more elegant
and general solutions.
• For example, if you use constraint propagation to solve a Sudoku puzzle,
you might discover that some cells belong to a subset that can be solved
independently of the rest.

Artificial Intelligence
Challenges of Constraint Propagation

• One of the main challenges of constraint propagation is that it can be


computationally expensive and time-consuming, depending on the number
and complexity of the constraints.
• You might have to check and update the domains and constraints of many
variables, and perform logical inference and deduction. This can be
especially costly if the constraints are non-linear, global, or dynamic.
• Another challenge of constraint propagation is that it can be incomplete
and insufficient, meaning that it might not be able to reduce the domains or
infer new constraints enough to solve the problem.
• You might still have to use other techniques, such as backtracking, heuristic
search, or local search, to find a solution or prove that none exists.

Artificial Intelligence
Some examples of Constraint Propagation
• Map coloring is a great example: by reducing the domains of each region
based on the colors of its neighbors, you can assign colors to a map such
that no two adjacent regions have the same color.
• Similarly, constraint propagation can be used to solve Sudoku puzzles and
N-queens games.
• In Sudoku, you reduce the domains of each cell based on the values of its
row, column, and box.
• Finally, you can use constraint propagation to place N queens on an N x N
chessboard such that no two queens attack each other by reducing the
domains of each row and column based on the diagonal constraints.

Artificial Intelligence
How to use constraint propagation in algorithm design
• To use constraint propagation in algorithm design, you need to follow some
steps.
• First, you need to formulate your problem as a CSP, by identifying the
variables, the domains, and the constraints.
• Second, you need to choose a suitable algorithm for applying constraint
propagation, such as arc consistency, path consistency, or k-consistency.
• Third, you need to implement the algorithm in your preferred programming
language, using data structures such as queues, stacks, or graphs.
• Fourth, you need to test and evaluate your algorithm on different instances
of the problem, and compare it with other approaches.

Artificial Intelligence
Constraint propagation: Inference in CSPs
• A number of inference techniques use the constraints to infer which
variable/value pairs are consistent and which are not. These include node,
arc, path, and k-consistency.

• Constraint propagation: Using the constraints to reduce the number of


legal values for a variable, which in turn can reduce the legal values for
another variable, and so on.

• Local consistency: If we treat each variable as a node in a graph and


each binary constraint as an edge/arc, then the process of enforcing local
consistency in each part of the graph causes inconsistent values to be
eliminated throughout the graph.

Artificial Intelligence
There are different types of local consistency:
Node consistency

• A single variable (a node in the CSP network) is node-consistent if all the


values in the variable’s domain satisfy the variable’s unary constraint.

• We say that a network is node-consistent if every variable in the network is


node-consistent.

• For example, in the variant of the Australia map-coloring problem where

South Australians dislike green, the variable SA starts with the domain {red,
green, blue}. We can make it node-consistent by eliminating green, leaving
SA with the reduced domain {red, blue}.

Artificial Intelligence
There are different types of local consistency:
Arc consistency

• A variable in a CSP is arc-consistent if every value in its domain satisfies


the variable’s binary constraints.
• Xi is arc-consistent with respect to another variable Xj if for every value in
the current domain Di there is some value in the domain Dj that satisfies
the binary constraint on the arc (Xi, Xj).
• A network is arc-consistent if every variable is arc-consistent with every
other variable.
• Arc consistency tightens down the domains (unary constraint) using the
arcs (binary constraints).

Artificial Intelligence
AC-3 algorithm:

Artificial Intelligence
AC-3 algorithm:

The complexity of AC-3:

Assume a CSP with n variables, each with domain size at most d,
and with c binary constraints (arcs). Each arc (Xk, Xi) can be
inserted in the queue only d times, because Xi has at most d
values to delete. Checking the consistency of an arc can be done in
O(d^2) time, so the total worst-case time is O(cd^3).

Artificial Intelligence
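Below is a minimal Python sketch of the standard AC-3 procedure whose complexity is analysed above; the domains/neighbors/constraint data structures are illustrative choices rather than a fixed API.

from collections import deque

def revise(domains, xi, xj, constraint):
    # Remove values of xi that have no supporting value in xj's domain.
    revised = False
    for vi in set(domains[xi]):
        if not any(constraint(xi, vi, xj, vj) for vj in domains[xj]):
            domains[xi].discard(vi)
            revised = True
    return revised

def ac3(domains, neighbors, constraint):
    # domains: var -> set of values; neighbors: var -> vars sharing a constraint;
    # constraint(xi, vi, xj, vj) is True if the pair of values is allowed.
    queue = deque((xi, xj) for xi in domains for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, constraint):
            if not domains[xi]:              # empty domain -> inconsistency
                return False
            for xk in neighbors[xi]:         # re-check arcs pointing at xi
                if xk != xj:
                    queue.append((xk, xi))
    return True

# Example: WA and NT must differ, and WA is already fixed to red.
domains = {"WA": {"red"}, "NT": {"red", "green", "blue"}}
neighbors = {"WA": ["NT"], "NT": ["WA"]}
diff = lambda xi, vi, xj, vj: vi != vj
print(ac3(domains, neighbors, diff), domains["NT"])   # True {'green', 'blue'}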
Path consistency

Path consistency:

• A two-variable set {Xi, Xj} is path-consistent with respect to


a third variable Xm if, for every assignment {Xi = a, Xj = b}
consistent with the constraint on {Xi, Xj}, there is an
assignment to Xm that satisfies the constraints on {Xi, Xm}
and {Xm, Xj}.

• Path consistency tightens the binary constraints by using


implicit constraints that are inferred by looking at triples of
variables.

Artificial Intelligence
K-consistency
K-consistency: A CSP is k-consistent if, for any set of k-1 variables and for any
consistent assignment to those variables, a consistent value can always be
assigned to any kth variable.

1-consistency = node consistency; 2-consistency = arc consistency; 3-consistency


= path consistency.

A CSP is strongly k-consistent if it is k-consistent and is also (k - 1)-consistent, (k


– 2)-consistent, … all the way down to 1-consistent.

If we take a CSP with n nodes and make it strongly n-consistent, we are guaranteed to find a
solution in time O(n^2 d). But any algorithm for establishing n-consistency must take time
exponential in n in the worst case, and it also requires space that is exponential in n.

Artificial Intelligence
Global constraints
A global constraint is one involving an arbitrary number of variables (but not
necessarily all variables). Global constraints can be handled by special-
purpose algorithms that are more efficient than general-purpose methods.

1) inconsistency detection for Alldiff constraints


A simple algorithm: First remove any variable in the constraint that has a
singleton domain, and delete that variable’s value from the domains of the
remaining variables. Repeat as long as there are singleton variables. If at
any point an empty domain is produced or there are more variables than
domain values left, then an inconsistency has been detected.
A simple consistency procedure for a higher-order constraint is sometimes
more effective than applying arc consistency to an equivalent set of binary
constraints.

Artificial Intelligence
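A small Python sketch of the simple Alldiff inconsistency check described above (the function name and data structures are illustrative):

def alldiff_inconsistent(domains, variables):
    # Repeatedly remove variables with singleton domains and delete their value
    # from the remaining domains; report an inconsistency if a domain empties
    # or more variables than values are left.
    domains = {v: set(domains[v]) for v in variables}   # work on a copy
    remaining = set(variables)
    while True:
        singletons = [v for v in remaining if len(domains[v]) == 1]
        if not singletons:
            break
        for v in singletons:
            value = next(iter(domains[v]))
            remaining.discard(v)
            for w in remaining:
                domains[w].discard(value)
                if not domains[w]:
                    return True              # empty domain -> inconsistent
    values_left = set().union(*(domains[v] for v in remaining)) if remaining else set()
    return len(remaining) > len(values_left)  # more variables than values left

# Three variables must all differ, but only two values are really available.
print(alldiff_inconsistent({"A": {1}, "B": {1, 2}, "C": {1, 2}}, ["A", "B", "C"]))  # True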
Global constraints
2) inconsistency detection for resource constraint (the atmost
constraint)
We can detect an inconsistency simply by checking the sum of the minimum
values of the current domains;
e.g.
Atmost(10, P1, P2, P3, P4): no more than 10 personnel are assigned in total.
If each variable has the domain {3, 4, 5, 6}, the Atmost constraint cannot be
satisfied.
We can enforce consistency by deleting the maximum value of any domain
if it is not consistent with the minimum values of the other domains.
e.g. If each variable in the example has the domain {2, 3, 4, 5, 6}, the values
5 and 6 can be deleted from each domain.

Artificial Intelligence
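A small Python sketch of the Atmost check and pruning described above, reproducing both numeric examples (the function name is illustrative):

def atmost_prune(limit, domains):
    # Returns None if the constraint is already unsatisfiable (the sum of the
    # domain minimums exceeds the limit); otherwise returns pruned domains.
    mins = {v: min(d) for v, d in domains.items()}
    if sum(mins.values()) > limit:
        return None                                   # inconsistency detected
    pruned = {}
    for v, d in domains.items():
        others_min = sum(mins.values()) - mins[v]     # best case for the others
        pruned[v] = {x for x in d if x + others_min <= limit}
    return pruned

# Atmost(10, P1..P4) with domains {3,4,5,6}: the minimums already sum to 12 > 10.
print(atmost_prune(10, {p: {3, 4, 5, 6} for p in ["P1", "P2", "P3", "P4"]}))
# With domains {2,3,4,5,6}: the values 5 and 6 are deleted from each domain.
print(atmost_prune(10, {p: {2, 3, 4, 5, 6} for p in ["P1", "P2", "P3", "P4"]}))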
Global constraints
3) inconsistency detection for bounds consistent
For large resource-limited problems with integer values, domains are
represented by upper and lower bounds and are managed by bounds
propagation.
e.g. suppose there are two flights F1 and F2 in an airline-scheduling
problem, for which the planes have capacities 165 and 385, respectively.
The initial domains for the numbers of passengers on each flight are
D1 = [0, 165] and D2 = [0, 385].

Now suppose we have the additional constraint that the two flights together
must carry 420 people: F1 + F2 = 420. Propagating the bounds constraints, we
reduce the domains to D1 = [35, 165] and D2 = [255, 385].
A CSP is bounds consistent if for every variable X, and for both the lower-
bound and upper-bound values of X, there exists some value of Y that
satisfies the constraint between X and Y for every variable Y.
Artificial Intelligence
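A minimal Python sketch of bounds propagation for the flight example above, assuming each domain is kept as a (low, high) interval:

def propagate_sum_bounds(d1, d2, total):
    # Bounds propagation for the constraint X + Y == total, where each
    # domain is an interval (low, high).
    lo1, hi1 = d1
    lo2, hi2 = d2
    # X = total - Y, so X's bounds are clipped by Y's bounds (and vice versa).
    new1 = (max(lo1, total - hi2), min(hi1, total - lo2))
    new2 = (max(lo2, total - hi1), min(hi2, total - lo1))
    return new1, new2

# F1 + F2 = 420 with plane capacities 165 and 385.
print(propagate_sum_bounds((0, 165), (0, 385), 420))   # ((35, 165), (255, 385))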
Solving Constraint Satisfaction Problems in Artificial
Intelligence
1. Backtracking Search for CSP in Artificial Intelligence:
Backtracking search, a form of depth-first search, is commonly used for solving
CSPs. Inference can be interwoven with search.

Commutativity: CSPs are all commutative. A problem is commutative if the


order of application of any given set of actions has no effect on the outcome.

Backtracking search: A depth-first search that chooses values for one variable
at a time and backtracks when a variable has no legal values left to assign.

Backtracking algorithm repeatedly chooses an unassigned variable, and


then tries all values in the domain of that variable in turn, trying to find a
solution. If an inconsistency is detected, then BACKTRACK returns failure,
causing the previous call to try another value.
Artificial Intelligence
Solving Constraint Satisfaction Problems in Artificial
Intelligence
1. Backtracking Search for CSP in Artificial Intelligence:

There is no need to supply BACKTRACKING-SEARCH with a domain-specific


initial state, action function, transition model, or goal test.

BACKTRACKING-SEARCH keeps only a single representation of a state and
alters that representation rather than creating new ones.

Artificial Intelligence
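A minimal runnable Python sketch of backtracking search for a CSP, shown on the Australia map-coloring problem used later in these slides; the helper names and the consistency check are illustrative rather than a fixed API.

def backtracking_search(variables, domains, consistent):
    # consistent(var, value, assignment) checks the constraints involving var.
    def backtrack(assignment):
        if len(assignment) == len(variables):
            return assignment                       # all variables assigned
        var = next(v for v in variables if v not in assignment)   # unassigned
        for value in domains[var]:
            if consistent(var, value, assignment):
                assignment[var] = value
                result = backtrack(assignment)
                if result is not None:
                    return result
                del assignment[var]                 # undo and try another value
        return None                                 # failure: backtrack further up
    return backtrack({})

# Map coloring: adjacent regions must receive different colors.
adjacent = {("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
            ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")}

def different_colors(var, value, assignment):
    return all(assignment.get(other) != value
               for a, b in adjacent
               for other in ((b,) if a == var else (a,) if b == var else ()))

regions = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
colors = {r: ["red", "green", "blue"] for r in regions}
print(backtracking_search(regions, colors, different_colors))   # a valid coloring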
To solve CSPs efficiently without domain-specific knowledge, address following
questions:

1)function SELECT-UNASSIGNED-VARIABLE: which variable should be


assigned next?

function ORDER-DOMAIN-VALUES: in what order should its values be tried?

2)function INFERENCE: what inferences should be performed at each step in


the search?

3)When the search arrives at an assignment that violates a constraint, can the
search avoid repeating this failure?

Artificial Intelligence
1. Variable and value ordering

SELECT-UNASSIGNED-VARIABLE

Variable selection—fail-first

Minimum-remaining-values (MRV) heuristic: the idea of choosing the variable
with the fewest “legal” values. Also known as the “most constrained variable” or
“fail-first” heuristic, it picks the variable that is most likely to cause a failure soon,
thereby pruning the search tree. If some variable X has no legal values left, the MRV
heuristic will select X and failure will be detected immediately, avoiding pointless
searches through other variables.

E.g. After the assignment for WA=red and NT=green, there is only one possible
value for SA, so it makes sense to assign SA=blue next rather than assigning Q.
Artificial Intelligence
[Powerful guide]
Degree heuristic: The degree heuristic attempts
to reduce the branching factor on future choices by
selecting the variable that is involved in the largest
number of constraints on other unassigned
variables. [useful tie-breaker]
e.g. SA is the variable with highest degree 5; the
other variables have degree 2 or 3; T has degree
0.

Artificial Intelligence
ORDER-DOMAIN-VALUES
Value selection—fail-last
If we are trying to find all the solutions to a problem (not just the first one),
then the ordering does not matter.
Least-constraining-value heuristic: prefers the value that rules out the
fewest choices for the neighboring variables in the constraint graph. (Try to
leave the maximum flexibility for subsequent variable assignments.)

e.g. Suppose we have generated the partial assignment WA=red and

NT=green, and our next choice is for Q. Blue would be a bad choice,
because it eliminates the last legal value left for Q's neighbor SA; the
heuristic therefore prefers red to blue.

Artificial Intelligence
The minimum-remaining-values and degree heuristics are domain-independent
methods for deciding which variable to choose next in a backtracking search.
The least-constraining-value heuristic helps in deciding which value to try first
for a given variable.

Artificial Intelligence
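A small Python sketch combining the MRV heuristic with the degree heuristic as a tie-breaker; it assumes the domains dictionary already reflects the values pruned by earlier assignments (all names are illustrative).

def select_unassigned_variable(variables, domains, assignment, neighbors):
    # MRV: fewest remaining legal values; ties broken by the degree heuristic,
    # i.e. the most constraints on other unassigned variables.
    unassigned = [v for v in variables if v not in assignment]
    return min(unassigned,
               key=lambda v: (len(domains[v]),
                              -sum(1 for n in neighbors[v] if n not in assignment)))

# After WA=red and NT=green, SA has only one remaining value, so MRV picks it.
domains = {"SA": {"blue"}, "Q": {"red", "blue"}, "NSW": {"red", "green", "blue"},
           "V": {"red", "green", "blue"}, "T": {"red", "green", "blue"}}
neighbors = {"SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
             "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": []}
print(select_unassigned_variable(list(domains), domains,
                                 {"WA": "red", "NT": "green"}, neighbors))   # SA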
2. Interleaving search and inference
INFERENCE

forward checking: [One of the simplest forms of inference.] Whenever a


variable X is assigned, the forward-checking process establishes arc
consistency for it: for each unassigned variable Y that is connected to X by a
constraint, delete from Y’s domain any value that is inconsistent with the value
chosen for X.
There is no reason to do forward checking if we have already done arc
consistency as a preprocessing step.

Artificial Intelligence
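A minimal Python sketch of forward checking as described above; the data structures and the constraint callback are assumptions made for illustration.

def forward_check(x, value, domains, neighbors, assignment, constraint):
    # After assigning value to x, prune inconsistent values from the domain of
    # every unassigned neighbor y; report failure if some domain becomes empty.
    for y in neighbors[x]:
        if y in assignment:
            continue
        domains[y] = {v for v in domains[y] if constraint(x, value, y, v)}
        if not domains[y]:
            return False            # dead end: the search should backtrack
    return True

# Example: assigning WA = red removes red from NT's and SA's domains.
domains = {"NT": {"red", "green", "blue"}, "SA": {"red", "green", "blue"}}
neighbors = {"WA": ["NT", "SA"]}
ok = forward_check("WA", "red", domains, neighbors, {},
                   lambda x, vx, y, vy: vx != vy)
print(ok, domains)                  # True, with red removed from NT and SA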
2. Interleaving search and inference

Advantage: For many problems the search will be more effective if we combine

the MRV heuristic with forward checking.

Disadvantage: Forward checking only makes the current variable arc-consistent;

it does not look ahead and make all the other variables arc-consistent.

Artificial Intelligence
2. Interleaving search and inference
MAC (Maintaining Arc Consistency) algorithm:
[More powerful than forward checking; detects inconsistency earlier.]
After a variable Xi is assigned a value, the INFERENCE procedure calls AC-3,
but instead of a queue of all arcs in the CSP, we start with only the arcs (Xj, Xi) for
all Xj that are unassigned variables and neighbors of Xi. From there, AC-3
does constraint propagation in the usual way, and if any variable has its domain
reduced to the empty set, the call to AC-3 fails and we know to backtrack
immediately.

Artificial Intelligence
3. Intelligent backtracking
Chronological backtracking:

The BACKTRACKING-SEARCH in Fig 6.5: when a branch of the search

fails, back up to the preceding variable and try a different value for it. (The
most recent decision point is revisited.)

Artificial Intelligence
3. Intelligent backtracking

e.g.

Suppose we have generated the partial


assignment {Q=red, NSW=green,
V=blue, T=red}.

When we try the next variable SA, we
see that every value violates a constraint.

We back up to T and try a new color, but this
cannot resolve the problem.

Artificial Intelligence
Intelligent backtracking: Backtrack to a variable that was responsible for
making one of the possible values of the next variable (e.g. SA) impossible.

Conflict set for a variable: A set of assignments that are in conflict with
some value for that variable.

(e.g. The set {Q=red, NSW=green, V=blue} is the conflict set for SA.)

backjumping method: Backtracks to the most recent assignment in the


conflict set.

(e.g. backjumping would jump over T and try a new value for V.)

Artificial Intelligence
THANK YOU !!
Prof. U. M Rane

[email protected]

K.K.Wagh Institute of Engineering Education & Research, Nashik
