Be (CS) Spring 2022 Semester: CS-438: Computer Systems Modeling
▪ Thus, state j being accessible from state i means that it is possible for the system to enter state j eventually when it starts from state i.
▪ In the machine example (lec 3), pij(2) > 0 for all i and j, so every state is accessible from every other state.
▪ In general, a sufficient condition for all states to be accessible is that there exists a value of n for which pij(n) > 0 for all i and j.
▪ In the gambling example, state 2 is not accessible from state 3.
▪ This can be deduced from the context of the game (once the player reaches state 3, the player never leaves this state), which implies that p32(n) = 0 for all n ≥ 0.
▪ However, even though state 2 is not accessible from state 3, state 3 is accessible from state 2 since, for n = 1, the transition matrix indicates that p23 = p > 0.
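Accessibility can be checked mechanically from the transition matrix: state j is accessible from state i exactly when there is a directed path of positive-probability transitions from i to j. A minimal sketch (the gambling-example matrix below is an illustration assuming p = 1/2; it is not taken from the slides):

```python
import numpy as np

def is_accessible(P, i, j):
    """Return True if state j is accessible from state i, i.e.
    p_ij(n) > 0 for some n >= 0 (n = 0 counts, so every state is
    accessible from itself)."""
    if i == j:
        return True
    M = P.shape[0]
    # Breadth-first search over positive-probability edges.
    seen, frontier = {i}, [i]
    while frontier:
        nxt = []
        for s in frontier:
            for t in range(M):
                if P[s, t] > 0 and t not in seen:
                    if t == j:
                        return True
                    seen.add(t)
                    nxt.append(t)
        frontier = nxt
    return False

# Gambling example, assuming p = 1/2: states 0 and 3 are absorbing.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
print(is_accessible(P, 2, 3))  # True: p23 = p > 0
print(is_accessible(P, 3, 2))  # False: the process never leaves state 3
```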
CSM (CS-438) 1
CLASSIFICATION OF STATES OF A MARKOV CHAIN
▪ If state j is accessible from state i and state i is accessible from state j,
o then states i and j are said to communicate.
▪ In the machine example, all states communicate.
▪ In the gambling example, states 2 and 3 do not.
▪ In general,
1. Any state communicates with itself (because pii(0) = P{X0 = i | X0 = i} = 1).
2. If state i communicates with state j, then state j communicates with state i.
3. If state i communicates with state j and state j communicates with state k, then state i communicates with state k.
▪ Properties 1 and 2 follow from the definition of states communicating,
o whereas property 3 follows from the Chapman-Kolmogorov equations.
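Since states i and j communicate exactly when each is accessible from the other, the communicating classes are the strongly connected components of the transition graph. A sketch, again assuming the p = 1/2 gambling matrix from the slides' example:

```python
import numpy as np

def reachable(P, i):
    """Set of states accessible from i (including i, since pii(0) = 1)."""
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for t in range(P.shape[0]):
            if P[s, t] > 0 and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def communicating_classes(P):
    """Group states into classes: i and j share a class iff each is
    accessible from the other (mutual accessibility)."""
    M = P.shape[0]
    reach = [reachable(P, i) for i in range(M)]
    classes, assigned = [], set()
    for i in range(M):
        if i in assigned:
            continue
        cls = {j for j in reach[i] if i in reach[j]}
        classes.append(sorted(cls))
        assigned |= cls
    return classes

# Gambling example (p = 1/2): the absorbing states 0 and 3 each form
# their own class; states 1 and 2 communicate with each other.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
print(communicating_classes(P))  # [[0], [1, 2], [3]]
```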
Recurrent States and Transient States
▪ Thus, if state i is transient and the process visits this state, there is a positive probability (perhaps even 1) that the process will later move to state j and so will never return to state i.
▪ Consequently, a transient state will be visited only a finite number of times.
▪ A state is said to be a recurrent state if, upon entering this state, the process definitely will return to this state again.
▪ Therefore, a state is recurrent if and only if it is not transient.
▪ Since a recurrent state definitely will be revisited after each visit, it will be visited infinitely often if the process continues forever.
▪ A return to a state occurs whenever the process reenters that state at some later step (including staying in it at the very next step).
▪ A state is said to be an absorbing state if, upon entering this state, the process never will leave this state again.
o A special type of recurrent state.
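For a finite chain these definitions give a mechanical test: a communicating class is recurrent iff it is closed (no positive-probability transition leaves the class), and transient otherwise. A sketch, again assuming the p = 1/2 gambling matrix:

```python
import numpy as np

def classify_states(P):
    """Label each state 'recurrent' or 'transient'. In a finite chain,
    a communicating class is recurrent iff it is closed."""
    M = P.shape[0]

    def reachable(i):
        seen, stack = {i}, [i]
        while stack:
            s = stack.pop()
            for t in range(M):
                if P[s, t] > 0 and t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    reach = [reachable(i) for i in range(M)]
    labels = {}
    for i in range(M):
        cls = {j for j in reach[i] if i in reach[j]}
        # Closed: no transition with positive probability exits the class.
        closed = all(P[s, t] == 0
                     for s in cls for t in range(M) if t not in cls)
        labels[i] = "recurrent" if closed else "transient"
    return labels

# Gambling example (p = 1/2): the absorbing states 0 and 3 are
# recurrent; states 1 and 2 are transient (the process eventually
# gets absorbed and never returns).
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
print(classify_states(P))
# {0: 'recurrent', 1: 'transient', 2: 'transient', 3: 'recurrent'}
```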
Lecture 23
Recurrent States and Transient States
▪ State 4 also is a transient state because if the process starts in state 4, it immediately leaves and can never return.
▪ States 0 and 1 are recurrent states.
▪ To see this, observe from P that if the process starts in either of these states, it can never leave these two states.
▪ Furthermore, whenever the process moves from one of these states to the other one, it always will return to the original state eventually.

Periodicity Properties
▪ Another useful property of Markov chains is periodicity.
▪ The period of state i is defined to be the integer t (t > 1) such that pii(n) = 0 for all values of n other than t, 2t, 3t, … and t is the largest integer with this property.
▪ In the gambling example, starting in state 1, it is possible for the process to enter state 1 only at times 2, 4, …, so state 1 has period 2.
1 → 2 → 1 → 2 → 1 → 2 → 3
▪ Reason: the player can break even (neither winning nor losing) only at times 2, 4, …, which can be verified by calculating p11(n) for all n and noting that p11(n) = 0 for n odd.
▪ If there are two consecutive numbers s and s+1 such that the process can be in state i at times s and s+1, the state is said to have period 1 and is called an aperiodic state.
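The period of a state can be computed numerically as the greatest common divisor of the times n at which pii(n) > 0 (the standard gcd characterization, which agrees with the definition above). A sketch assuming the p = 1/2 gambling matrix:

```python
import numpy as np
from math import gcd

def period(P, i, n_max=50):
    """Period of state i: gcd of all n in 1..n_max with p_ii(n) > 0.
    Scanning matrix powers up to n_max is an approximation; for the
    small examples here it settles after the first few returns."""
    t = 0
    Pn = np.eye(P.shape[0])
    for n in range(1, n_max + 1):
        Pn = Pn @ P          # Pn now holds the n-step probabilities
        if Pn[i, i] > 1e-12:
            t = gcd(t, n)
    return t

# Gambling example (p = 1/2): from state 1 the process can return to
# state 1 only at even times, so state 1 has period 2; the absorbing
# state 0 returns at every step, so it is aperiodic (period 1).
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
print(period(P, 1))  # 2
print(period(P, 0))  # 1
```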
Lecture 24
Periodicity Properties
▪ Just as recurrence is a class property, it can be shown that periodicity is a class property.
▪ That is, if state i in a class has period t, all states in that class have period t.
▪ In the gambling example, state 2 also has period 2 because it is in the same class as state 1 and we noted above that state 1 has period 2.
▪ In a finite-state Markov chain, recurrent states that are aperiodic are called ergodic states.
▪ A Markov chain is said to be ergodic if all its states are ergodic states.

Long-Run Properties of Markov Chains
▪ The long-run behavior of finite-state Markov chains as reflected by the steady-state probabilities shows that
o there is a limiting probability that the system will be in each state j after a large number of transitions, and
o this probability is independent of the probability distribution of the initial state.
▪ These properties are summarized below.
Long-Run Properties of Markov Chains
▪ For any irreducible ergodic Markov chain, lim (n→∞) pij(n) exists and is independent of i.
▪ Furthermore, lim (n→∞) pij(n) = πj > 0, where the πj uniquely satisfy the steady-state equations
πj = Σi πi pij (summing over i = 0, 1, …, M), for each j = 0, 1, …, M, together with Σj πj = 1.
▪ The πj are called the steady-state probabilities of the Markov chain.
▪ It is important to note that the steady-state probabilities do not imply that the process settles down into one state.
▪ On the contrary,
o the process continues to make transitions from state to state, and
o at any step n the transition probability from state i to state j is still pij.
▪ Let M=2
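For M = 2 the steady-state equations reduce to a small linear system that can be solved directly. A sketch of a numerical solver; the weather-example transition probabilities below are assumptions chosen to be consistent with the steady-state values 0.75 and 0.25 used later in the slides (state 0 = dry, state 1 = rain):

```python
import numpy as np

def steady_state(P):
    """Solve pi = pi P together with sum(pi) = 1 for an irreducible
    ergodic chain, by replacing one redundant balance equation with
    the normalization condition."""
    M = P.shape[0]
    A = P.T - np.eye(M)   # rows encode: sum_i pi_i p_ij - pi_j = 0
    A[-1, :] = 1.0        # replace last (redundant) row with sum = 1
    b = np.zeros(M)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Assumed weather-example matrix: P(dry -> dry) = 0.8, P(rain -> dry) = 0.6.
P = np.array([[0.8, 0.2],
              [0.6, 0.4]])
print(steady_state(P))  # [0.75 0.25]
```

Solving the 2-state balance equation by hand gives the same answer: π0 = 0.8π0 + 0.6π1 and π0 + π1 = 1 imply π0 = 3π1, hence π0 = 0.75 and π1 = 0.25.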
Long-Run Properties of Markov Chains

Application to the Weather Example
Expected Average Cost per Unit Time
▪ Suppose that a cost (or other penalty function) C(Xt) is incurred when the process is in state Xt at time t, for t = 0, 1, 2, … .
▪ Note that C(Xt) is a random variable that takes on any one of the values C(0), C(1), …, C(M) and that the function C(·) is independent of t.

➢ Application to the Weather Example
▪ To illustrate, consider the weather example again.
▪ Suppose the daily cost of cleaning the town of Centerville is $60 (per sq km) if it is dry and $100 (per sq km) if it is raining.
▪ The cost function is:
C(Xt) = $60 if Xt = 0, $100 if Xt = 1
▪ The long-run expected average daily cost can then be obtained from the preceding equation, i.e.,
E[C(Xt)] = 0.75 (60) + 0.25 (100) = 45 + 25 = $70/day
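The computation above is just the dot product of the steady-state vector with the cost vector, which generalizes to any finite state space:

```python
import numpy as np

def long_run_average_cost(pi, cost):
    """Long-run expected average cost per unit time:
    sum over states j of pi_j * C(j)."""
    return float(np.dot(pi, cost))

# Weather example: pi = (0.75, 0.25), C(0) = $60 (dry), C(1) = $100 (rain).
pi = np.array([0.75, 0.25])
cost = np.array([60.0, 100.0])
print(long_run_average_cost(pi, cost))  # 70.0, i.e. $70 per day
```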