BE (CS) Spring 2022 Semester
CS-438: Computer Systems Modeling

Lectures 22-24
Stochastic Processes and Markov Chains

Dr Syed Zaffar Qasim
Assistant Professor (CIS)

Lecture 22

CLASSIFICATION OF STATES OF A MARKOV CHAIN

▪ It is evident that the transition probabilities associated with the states play an important role in the study of Markov chains.
▪ To further describe the properties of Markov chains, it is necessary to present some concepts and definitions concerning these states.
▪ State j is said to be accessible from state i if pij(n) > 0 for some n ≥ 0.
▪ (Recall that pij(n) is just the conditional probability of being in state j after n steps, starting in state i.)

CLASSIFICATION OF STATES OF A MARKOV CHAIN

▪ Thus, state j being accessible from state i means that it is possible for the system to enter state j eventually when it starts from state i.
▪ In the machine example (lec 3), pij(2) > 0 for all i and j, so every state is accessible from every other state.
▪ In general, a sufficient condition for all states to be accessible is that there exists a value of n for which pij(n) > 0 for all i and j.

CLASSIFICATION OF STATES OF A MARKOV CHAIN

▪ In the gambling example, state 2 is not accessible from state 3.
▪ This can be deduced from the context of the game (once the player reaches state 3, the player never leaves this state), which implies that p32(n) = 0 for all n ≥ 0.
▪ However, even though state 2 is not accessible from state 3, state 3 is accessible from state 2 since, for n = 1, the transition matrix indicates that p23 = p > 0.
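Accessibility can be checked mechanically from powers of the transition matrix. Below is a minimal Python sketch, assuming the gambling example's transition matrix with an illustrative win probability p = 0.4 (the slide only states that p23 = p > 0):

```python
import numpy as np

# Gambling example: states 0-3 are the player's capital; states 0 and 3
# are absorbing. The win probability p = 0.4 is illustrative only.
p = 0.4
P = np.array([[1.0,   0.0, 0.0, 0.0],
              [1 - p, 0.0, p,   0.0],
              [0.0, 1 - p, 0.0, p  ],
              [0.0,   0.0, 0.0, 1.0]])

def accessible(P, i, j):
    """True if pij(n) > 0 for some n >= 0, i.e., j is accessible from i."""
    M = P.shape[0]
    Pn = np.eye(M)              # P^0 = identity, so pii(0) = 1
    for _ in range(M + 1):      # n = 0, 1, ..., M suffices in a finite chain
        if Pn[i, j] > 0:
            return True
        Pn = Pn @ P             # Chapman-Kolmogorov: P^(n+1) = P^n · P
    return False

print(accessible(P, 2, 3))      # True:  p23 = p > 0
print(accessible(P, 3, 2))      # False: p32(n) = 0 for all n >= 0
```

Checking n up to the number of states is enough, since any path between two states can be shortened so that it visits no state twice.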

CLASSIFICATION OF STATES OF A MARKOV CHAIN

▪ If state j is accessible from state i and state i is accessible from state j,
o then states i and j are said to communicate.
▪ In the machine example, all states communicate.
▪ In the gambling example, states 2 and 3 do not.

CLASSIFICATION OF STATES OF A MARKOV CHAIN

▪ In general,
1. Any state communicates with itself (because pii(0) = P{X0 = i | X0 = i} = 1).
2. If state i communicates with state j, then state j communicates with state i.
3. If state i communicates with state j and state j communicates with state k, then state i communicates with state k.
▪ Properties 1 and 2 follow from the definition of states communicating,
o whereas property 3 follows from the Chapman-Kolmogorov equations.

CLASSIFICATION OF STATES OF A MARKOV CHAIN

▪ As a result of these three properties of communication,
o the states may be partitioned into one or more separate classes
o such that those states that communicate with each other are in the same class.
o A class may consist of a single state.
▪ If there is only one class, i.e., all the states communicate, the Markov chain is said to be irreducible.
▪ In the machine example, the Markov chain is irreducible.
▪ The gambling example contains three classes.
o State 0 forms a class, state 3 forms a class, and states 1 and 2 form a class.

Recurrent States and Transient States

▪ It is often useful to talk about whether a process entering a state will ever return to this state.
▪ Here is one possibility.
▪ A state is said to be a transient state if, upon entering this state, the process may never return to this state again.
▪ Therefore, state i is transient if and only if there exists a state j (j ≠ i) that is accessible from state i but not vice versa, that is, state i is not accessible from state j (see the sketch below).
(Diagram: i → j, with no path back from j to i.)
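Both the class partition and this transient-state criterion can be computed directly from the reachability relation. A minimal sketch, again assuming the gambling example with the illustrative value p = 0.4:

```python
import numpy as np

# Gambling example again (p = 0.4 is illustrative).
p = 0.4
P = np.array([[1.0,   0.0, 0.0, 0.0],
              [1 - p, 0.0, p,   0.0],
              [0.0, 1 - p, 0.0, p  ],
              [0.0,   0.0, 0.0, 1.0]])
M = P.shape[0]

# reach[i, j] is True when state j is accessible from state i
# (pij(n) > 0 for some n >= 0).
reach = np.eye(M, dtype=bool)
Pn = np.eye(M)
for _ in range(M):
    Pn = Pn @ P
    reach |= Pn > 0

# States i and j communicate when each is accessible from the other.
communicate = reach & reach.T

# Partition the states into communicating classes.
classes, seen = [], set()
for i in range(M):
    if i not in seen:
        cls = {j for j in range(M) if communicate[i, j]}
        classes.append(cls)
        seen |= cls

# State i is transient iff some state j is accessible from i
# but i is not accessible from j.
transient = [i for i in range(M)
             if any(reach[i, j] and not reach[j, i] for j in range(M))]

print(classes)    # [{0}, {1, 2}, {3}]: the three classes named above
print(transient)  # [1, 2]
```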

Recurrent States and Transient States

▪ Thus, if state i is transient and the process visits this state, there is a positive probability (perhaps even a probability of 1) that the process will later move to state j and so will never return to state i.
▪ Consequently, a transient state will be visited only a finite number of times.
▪ A state is said to be a recurrent state if, upon entering this state, the process definitely will return to this state again.
▪ Therefore, a state is recurrent if and only if it is not transient.

Recurrent States and Transient States

▪ Since a recurrent state definitely will be revisited after each visit, it will be visited infinitely often if the process continues forever.
▪ Return to a state: if the process enters a certain state and then stays in this state at the next step, this counts as a return to (i.e., another visit of) that state.
▪ A state is said to be an absorbing state if, upon entering this state, the process never will leave this state again.
o An absorbing state is a special type of recurrent state.
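A quick Monte Carlo check of the finite-visits claim, assuming the same illustrative gambling chain (p = 0.4): every simulated path leaves the transient class {1, 2} after finitely many steps, so the number of visits to state 1 is always finite.

```python
import numpy as np

# Simulate the gambling chain from state 1 until absorption in 0 or 3,
# counting visits to the transient state 1 along the way.
rng = np.random.default_rng(0)
p = 0.4
P = np.array([[1.0,   0.0, 0.0, 0.0],
              [1 - p, 0.0, p,   0.0],
              [0.0, 1 - p, 0.0, p  ],
              [0.0,   0.0, 0.0, 1.0]])

visits = []
for _ in range(10_000):
    state, count = 1, 1          # start in state 1; count that first visit
    while state not in (0, 3):   # stop once an absorbing state is reached
        state = rng.choice(4, p=P[state])
        count += state == 1
    visits.append(count)

print(np.mean(visits))  # finite expected number of visits to state 1
```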

Lecture 23

Recurrent States and Transient States

▪ Recurrence is a class property.
▪ That is, all states in a class are either recurrent or transient.
▪ Furthermore, in a finite-state Markov chain, not all states can be transient.
▪ Therefore, all states in an irreducible finite-state Markov chain are recurrent.
▪ Thus, all states in the machine example are recurrent, since pij(2) is positive for all i and j.

Recurrent States and Transient States

▪ Therefore, state i is an absorbing state if and only if pii = 1.
(The slide shows the transition matrix P of a five-state example, states 0-4; a reconstruction follows below.)
▪ Note that state 2 is an absorbing state (and hence a recurrent state) because if the process enters state 2 (row 3 of the matrix), it will never leave.
▪ State 3 is a transient state because if the process is in state 3, there is a positive probability that it will never return.
▪ The probability is 1/3 that the process will go from state 3 to state 2 on the first step.
▪ Once the process is in state 2, it remains in state 2.
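The matrix image itself did not survive extraction. The version below is consistent with everything the slides state (p22 = 1, p32 = 1/3, states 0 and 1 forming a closed recurrent pair, state 4 leaving immediately); the specific entries in rows 0, 1, 3, and 4 that the text does not pin down are assumptions taken from the standard textbook form of this example.

```python
import numpy as np

# Five-state example from the slide. Only some entries are stated in the
# text; the remaining values are assumed from the usual textbook version.
P = np.array([[1/4, 3/4, 0,   0,   0],    # states 0 and 1 form a closed,
              [1/2, 1/2, 0,   0,   0],    #   recurrent class (assumed values)
              [0,   0,   1,   0,   0],    # state 2: absorbing, p22 = 1
              [0,   0,   1/3, 2/3, 0],    # state 3: transient, p32 = 1/3
              [1,   0,   0,   0,   0]])   # state 4: leaves at once (assumed target)

# Absorbing test from the slide: state i is absorbing iff pii = 1.
print([i for i in range(5) if P[i, i] == 1.0])  # [2]
```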

Recurrent States and Transient States

▪ State 4 also is a transient state because if the process starts in state 4, it immediately leaves and can never return.
▪ States 0 and 1 are recurrent states.
▪ To see this, observe from P that if the process starts in either of these states, it can never leave these two states.
▪ Furthermore, whenever the process moves from one of these states to the other one, it always will return to the original state eventually.

Periodicity Properties

▪ Another useful property of Markov chains is periodicity.
▪ The period of state i is defined to be the integer t (t > 1) such that pii(n) = 0 for all values of n other than t, 2t, 3t, … and t is the largest integer with this property.
▪ In the gambling example, starting in state 1, it is possible for the process to enter state 1 only at times 2, 4, …, so state 1 has period 2; for example:
1 → 2 → 1 → 2 → 1 → 2 → 3
▪ Reason: the player can break even (neither winning nor losing) only at times 2, 4, …, which can be verified by calculating p11(n) for all n and noting that p11(n) = 0 for n odd.
▪ If there are two consecutive numbers s and s + 1 such that the process can be in state i at times s and s + 1, the state is said to have period 1 and is called an aperiodic state.
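The period defined above equals the greatest common divisor of the times n at which a return is possible, i.e., of all n with pii(n) > 0. A minimal sketch for the gambling chain (illustrative p = 0.4 again):

```python
import numpy as np
from functools import reduce
from math import gcd

p = 0.4
P = np.array([[1.0,   0.0, 0.0, 0.0],
              [1 - p, 0.0, p,   0.0],
              [0.0, 1 - p, 0.0, p  ],
              [0.0,   0.0, 0.0, 1.0]])

def period(P, i, n_max=20):
    """gcd of all n <= n_max with pii(n) > 0 (n_max is ample here)."""
    return_times = []
    Pn = np.eye(P.shape[0])
    for n in range(1, n_max + 1):
        Pn = Pn @ P
        if Pn[i, i] > 0:
            return_times.append(n)
    return reduce(gcd, return_times)

print(period(P, 1))  # 2: returns to state 1 are possible only at even times
```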

Periodicity Properties

▪ Just as recurrence is a class property, it can be shown that periodicity is a class property.
▪ That is, if state i in a class has period t, all states in that class have period t.
▪ In the gambling example, state 2 also has period 2 because it is in the same class as state 1 and we noted above that state 1 has period 2.
▪ In a finite-state Markov chain, recurrent states that are aperiodic are called ergodic states.
▪ A Markov chain is said to be ergodic if all its states are ergodic states.

Lecture 24

Long-Run Properties of Markov Chains

▪ The long-run behavior of finite-state Markov chains, as reflected by the steady-state probabilities, shows that
o there is a limiting probability that the system will be in each state j after a large number of transitions, and
o this probability is independent of the probability distribution of the initial state.
▪ These properties are summarized below.

Long-Run Properties of Markov Chains

▪ For any irreducible ergodic Markov chain, lim (n→∞) pij(n) exists and is independent of i.
▪ Furthermore, lim (n→∞) pij(n) = πj > 0, where the πj satisfy the steady-state equations given below.
▪ The πj are called the steady-state probabilities of the Markov chain.

Long-Run Properties of Markov Chains

▪ It is important to note that the steady-state probability does not imply that the process settles down into one state.
▪ On the contrary,
o the process continues to make transitions from state to state, and
o at any step n the transition probability from state i to state j is still pij.
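A quick numerical illustration of this limit, using an arbitrary irreducible ergodic 3-state matrix (not one from the lectures): the rows of P^n become identical as n grows, so the limit of pij(n) does not depend on the starting state i.

```python
import numpy as np

# Arbitrary irreducible ergodic example (not from the lectures).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# All rows of P^50 agree (to machine precision); each row is the
# steady-state distribution (pi0, pi1, pi2).
print(np.linalg.matrix_power(P, 50))
```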

Long-Run Properties of Markov Chains

▪ The steady-state equations are:
πj = π0 p0j + π1 p1j + … + πM pMj, for j = 0, 1, …, M
π0 + π1 + … + πM = 1

Long-Run Properties of Markov Chains

▪ Let M = 2. The steady-state equations are then:
π0 = π0 p00 + π1 p10 + π2 p20
π1 = π0 p01 + π1 p11 + π2 p21
π2 = π0 p02 + π1 p12 + π2 p22
π0 + π1 + π2 = 1
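A sketch of solving these equations numerically for M = 2, reusing the illustrative matrix above; one balance equation is dropped in favour of the normalization equation, anticipating the redundancy noted on the next slide.

```python
import numpy as np

# Same illustrative 3-state matrix as above (M = 2).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
M = P.shape[0] - 1

# Balance equations pi_j = sum_i pi_i p_ij read (P^T - I) pi = 0.
# Keep the first M of them and replace the last by sum(pi) = 1.
A = np.vstack([(P.T - np.eye(M + 1))[:M],
               np.ones(M + 1)])
b = np.zeros(M + 1)
b[M] = 1.0

pi = np.linalg.solve(A, b)
print(pi)       # steady-state probabilities pi0, pi1, pi2
print(pi @ P)   # equals pi: the balance equations hold
```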

Long-Run Properties of Markov Chains

▪ Note that the steady-state equations consist of M + 2 equations in M + 1 unknowns.
▪ Because the system has a unique solution, at least one equation must be redundant and can, therefore, be deleted.
▪ It cannot be the equation π0 + π1 + … + πM = 1, because πj = 0 for all j would satisfy the other M + 1 equations.

Application to the Weather Example

▪ Returning to the weather example (M = 1), we see that the steady-state equations can be expressed as
π0 = π0 p00 + π1 p10
π1 = π0 p01 + π1 p11
π0 + π1 = 1
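A sketch of solving these, with transition probabilities assumed from the usual textbook form of the weather example (dry to dry 0.8, rain to dry 0.6); these assumed values reproduce the π0 = 0.75 and π1 = 0.25 used in the cost calculation on the next slide.

```python
import numpy as np

# Weather example (M = 1). The transition probabilities are assumptions
# consistent with the steady-state values used later in the lecture.
P = np.array([[0.8, 0.2],    # state 0 (dry):  P(dry), P(rain) tomorrow
              [0.6, 0.4]])   # state 1 (rain): P(dry), P(rain) tomorrow

# Steady-state equations: pi0 = pi0 p00 + pi1 p10, and pi0 + pi1 = 1.
A = np.array([[P[0, 0] - 1, P[1, 0]],   # balance equation for state 0
              [1.0,         1.0]])      # normalization
b = np.array([0.0, 1.0])

pi0, pi1 = np.linalg.solve(A, b)
print(pi0, pi1)  # 0.75 0.25
```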

Expected Average Cost per Unit Time

▪ Suppose that a cost (or other penalty function) C(Xt) is incurred when the process is in state Xt at time t, for t = 0, 1, 2, … .
▪ Note that C(Xt) is a random variable that takes on any one of the values C(0), C(1), …, C(M) and that the function C(·) is independent of t.
▪ The long-run expected average cost per unit time is then Σj πj C(j).

Expected Average Cost per Unit Time

➢ Application to the Weather Example
▪ To illustrate, consider the weather example again.
▪ Suppose the daily cost of cleaning the town of Centerville is $60 (per sq km) if it is dry and $100 (per sq km) if it is raining.
▪ The cost function can be written as:
C(Xt) = $60 if Xt = 0
C(Xt) = $100 if Xt = 1
▪ The long-run expected average daily cost can then be obtained from the preceding equation, i.e.,
E[C(Xt)] = 0.75(60) + 0.25(100) = 45 + 25 = $70/day
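A one-line check of this arithmetic, using the steady-state probabilities from above:

```python
# Long-run expected average daily cost, E[C] = sum_j pi_j * C(j).
pi = [0.75, 0.25]      # steady-state probabilities (dry, rain)
C = [60.0, 100.0]      # daily cleaning cost per sq km in each state

print(sum(pj * cj for pj, cj in zip(pi, C)))  # 70.0 ($/day)
```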

