MARKOV CHAINS: CLASSIFICATION OF STATES OF A MARKOV CHAIN


We have just seen near the end of the preceding section that the n-step transition probabilities for the inventory example converge to steady-state probabilities after a sufficient number of steps. However, this is not true for all Markov chains. The long-run properties of a Markov chain depend greatly on the characteristics of its states and transition matrix. To further describe the properties of Markov chains, it is necessary to present some concepts and definitions concerning these states.

State j is said to be accessible from state i if p_ij^(n) > 0 for some n ≥ 0. (Recall that p_ij^(n) is just the conditional probability of being in state j after n steps, starting in state i.) Thus, state j being accessible from state i means that it is possible for the system to enter state j eventually when it starts from state i. This is clearly true for the weather example (see Fig. 29.1) since p_ij > 0 for all i and j. In the inventory example (see Fig. 29.2), p_ij^(2) > 0 for all i and j, so every state is accessible from every other state. In general, a sufficient condition for all states to be accessible is that there exists a value of n for which p_ij^(n) > 0 for all i and j.

In the gambling example given at the end of Sec. 29.2 (see Fig. 29.4), state 2 is not accessible from state 3. This can be deduced from the context of the game (once the player reaches state 3, the player never leaves this state), which implies that p_32^(n) = 0 for all n ≥ 0. However, even though state 2 is not accessible from state 3, state 3 is accessible from state 2 since, for n = 1, the transition matrix given at the end of Sec. 29.2 indicates that p_23 = p > 0.
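Accessibility depends only on which entries of the transition matrix are positive, so it can be checked with a breadth-first search over the states. Below is a minimal Python sketch for the gambling chain of Fig. 29.4; the win probability p = 0.4 is an assumed illustrative value (any 0 < p < 1 gives the same accessibility structure), and the function name accessible is ours.

```python
from collections import deque

# Gambling chain from Sec. 29.2 (states 0 through 3), assuming win probability p = 0.4.
# States 0 and 3 are absorbing; from states 1 and 2 the player wins $1 with
# probability p and loses $1 with probability 1 - p.
p = 0.4
P = [
    [1.0,   0.0,   0.0, 0.0],   # state 0: ruin, never leaves
    [1 - p, 0.0,   p,   0.0],   # state 1
    [0.0,   1 - p, 0.0, p  ],   # state 2
    [0.0,   0.0,   0.0, 1.0],   # state 3: goal reached, never leaves
]

def accessible(P, i, j):
    """True if state j is accessible from state i, i.e., p_ij^(n) > 0 for some n >= 0."""
    seen, queue = {i}, deque([i])
    while queue:
        s = queue.popleft()
        if s == j:
            return True
        for t, prob in enumerate(P[s]):
            if prob > 0 and t not in seen:
                seen.add(t)
                queue.append(t)
    return False

print(accessible(P, 2, 3))  # True:  p_23 = p > 0
print(accessible(P, 3, 2))  # False: p_32^(n) = 0 for all n >= 0
```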

If state j is accessible from state i and state i is accessible from state j, then states i and j are said to communicate. In both the weather and inventory examples, all states communicate. In the gambling example, states 2 and 3 do not. (The same is true of states 1 and 3, states 1 and 0, and states 2 and 0.) In general,

1. Any state communicates with itself (because p_ii^(0) = P{X_0 = i | X_0 = i} = 1).

2. If state i communicates with state j, then state j communicates with state i.

3. If state i communicates with state j and state j communicates with state k, then state i communicates with state k.

Properties 1 and 2 follow from the definition of states communicating, whereas property 3 follows from the Chapman-Kolmogorov equations.

As a result of these three properties of communication, the states may be partitioned into one or more separate classes such that those states that communicate with each other are in the same class. (A class may consist of a single state.) If there is only one class, i.e., all the states communicate, the Markov chain is said to be irreducible. In both the weather and inventory examples, the Markov chain is irreducible. In both of the stock examples in Sec. 29.2, the Markov chain also is irreducible. However, the gambling example contains three classes. Observe in Fig. 29.4 how state 0 forms a class, state 3 forms a class, and states 1 and 2 form a class.
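Since communication is mutual accessibility, the communicating classes can be found by grouping the states that reach each other. Here is a sketch along those lines (the helper names are ours), again using the gambling chain with the assumed p = 0.4:

```python
def reachable_sets(P):
    """reach[i] = set of states accessible from state i (including i itself, n = 0)."""
    n = len(P)
    reach = []
    for i in range(n):
        seen, stack = {i}, [i]
        while stack:
            s = stack.pop()
            for t in range(n):
                if P[s][t] > 0 and t not in seen:
                    seen.add(t)
                    stack.append(t)
        reach.append(seen)
    return reach

def communicating_classes(P):
    """Partition the states into classes of mutually accessible states."""
    reach = reachable_sets(P)
    classes, assigned = [], set()
    for i in range(len(P)):
        if i in assigned:
            continue
        cls = {j for j in reach[i] if i in reach[j]}  # j communicates with i
        classes.append(sorted(cls))
        assigned |= cls
    return classes

p = 0.4
P = [[1, 0, 0, 0], [1 - p, 0, p, 0], [0, 1 - p, 0, p], [0, 0, 0, 1]]
print(communicating_classes(P))  # [[0], [1, 2], [3]] -- three classes, so not irreducible
```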

Recurrent States and Transient States

It is often useful to talk about whether a process entering a state will ever return to this state. Here is one possibility.

A state is said to be a transient state if, upon entering this state, the process might never return to this state again. Therefore, state i is transient if and only if there exists a state j (j ≠ i) that is accessible from state i but not vice versa, that is, state i is not accessible from state j.

Thus, if state i is transient and the process visits this state, there is a positive probability (perhaps even a probability of 1) that the process will later move to state j and so will never return to state i. Consequently, a transient state will be visited only a finite number of times. To illustrate, consider the gambling example presented at the end of Sec. 29.2. Its state transition diagram shown in Fig. 29.4 indicates that both states 1 and 2 are transient states since the process will leave these states sooner or later to enter either state 0 or state 3 and then will remain in that state forever.
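A short simulation makes this concrete. The sketch below (assumptions as above, p = 0.4) runs the gambling chain from state 1 several times and counts the visits to the transient states 1 and 2; every path makes only finitely many such visits before being absorbed in state 0 or state 3.

```python
import random

random.seed(1)  # reproducible illustration
p = 0.4
P = [[1, 0, 0, 0],
     [1 - p, 0, p, 0],
     [0, 1 - p, 0, p],
     [0, 0, 0, 1]]

def visits_to_transient(start, max_steps=10_000):
    """Simulate one path; count the visits to the transient states 1 and 2."""
    state, visits = start, 0
    for _ in range(max_steps):
        if state in (1, 2):
            visits += 1
        if state in (0, 3):   # absorbed: the process never leaves this state
            break
        state = random.choices(range(4), weights=P[state])[0]
    return visits

print([visits_to_transient(1) for _ in range(10)])
# e.g., [1, 3, 1, ...] -- each path visits the transient states only finitely often
```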

When starting in state i, another possibility is that the process definitely will return to this state.

A state is said to be a recurrent state if, upon entering this state, the process definitely will return to this state again. Therefore, a state is recurrent if and only if it is not transient.

Since a recurrent state definitely will be revisited after each visit, it will be visited infinitely often if the process continues forever. For example, all the states in the state transition diagrams shown in Figs. 29.1, 29.2, and 29.3 are recurrent states because the process always will return to each of these states. Even for the gambling example, states 0 and 3 are recurrent states because the process will keep returning immediately to one of these states forever once the process enters that state. Note in Fig. 29.4 how the process eventually will enter either state 0 or state 3 and then will never leave that state again.

If the process enters a certain state and then stays in this state at the next step, this is considered a return to this state. Hence, the following kind of state is a special type of recurrent state.

A state is said to be an absorbing state if, upon entering this state, the process never will leave this state again. Therefore, state i is an absorbing state if and only if p_ii = 1.

As just noted, both states 0 and 3 for the gambling example fit this definition, so they both are absorbing states as well as a special type of recurrent state. We will discuss absorbing states further in Sec. 29.7.

Recurrence is a class property. That is, all states in a class are either recurrent or transient. Furthermore, in a finite-state Markov chain, not all states can be transient. Therefore, all states in an irreducible finite-state Markov chain are recurrent. Indeed, one can identify an irreducible finite-state Markov chain (and therefore conclude that all states are recurrent) by showing that all states of the process communicate. It has already been pointed out that a sufficient condition for all states to be accessible (and therefore communicate with each other) is that there exists a value of n for which p_ij^(n) > 0 for all i and j. Thus, all states in the inventory example (see Fig. 29.2) are recurrent, since p_ij^(2) is positive for all i and j. Similarly, both the weather example and the first stock example contain only recurrent states, since p_ij is positive for all i and j. By calculating p_ij^(2) for all i and j in the second stock example in Sec. 29.2 (see Fig. 29.3), it follows that all states are recurrent since p_ij^(2) > 0 for all i and j.

As another example, suppose that a Markov chain has the following transition matrix:

        State    0     1     2     3     4
          0  [  1/4   3/4    0     0     0  ]
          1  [  1/2   1/2    0     0     0  ]
    P =   2  [   0     0     1     0     0  ]
          3  [   0     0    1/3   2/3    0  ]
          4  [   1     0     0     0     0  ]

Note that state 2 is an absorbing state (and hence a recurrent state) because if the process enters state 2 (row 3 of the matrix), it will never leave. State 3 is a transient state because if the process is in state 3, there is a positive probability that it will never return. The probability is 1/3 that the process will go from state 3 to state 2 on the first step. Once the process is in state 2, it remains in state 2. State 4 also is a transient state because if the process starts in state 4, it immediately leaves and can never return. States 0 and 1 are recurrent states. To see this, observe from P that if the process starts in either of these states, it can never leave these two states. Furthermore, whenever the process moves from one of these states to the other one, it always will return to the original state eventually.
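In a finite-state chain this classification can be automated: a communicating class is recurrent exactly when it is closed (no positive transition probability leads out of the class) and transient otherwise, and an absorbing state is a closed class of size one. The sketch below (helper names are ours) applies this test to the matrix above.

```python
def classify_states(P):
    """Find the communicating classes and label each recurrent or transient."""
    n = len(P)
    # reach[i] = set of states accessible from state i
    reach = []
    for i in range(n):
        seen, stack = {i}, [i]
        while stack:
            s = stack.pop()
            for t in range(n):
                if P[s][t] > 0 and t not in seen:
                    seen.add(t)
                    stack.append(t)
        reach.append(seen)
    result, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        cls = {j for j in reach[i] if i in reach[j]}
        # A class is closed when no transition leads outside it.
        closed = all(P[s][t] == 0 for s in cls for t in range(n) if t not in cls)
        result.append((sorted(cls), "recurrent" if closed else "transient"))
        assigned |= cls
    return result

P = [
    [1/4, 3/4, 0,   0,   0],
    [1/2, 1/2, 0,   0,   0],
    [0,   0,   1,   0,   0],   # state 2: absorbing, since p_22 = 1
    [0,   0,   1/3, 2/3, 0],
    [1,   0,   0,   0,   0],
]
print(classify_states(P))
# [([0, 1], 'recurrent'), ([2], 'recurrent'), ([3], 'transient'), ([4], 'transient')]
```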

Periodicity Properties

Another useful property of Markov chains is periodicity. The period of state i is defined to be the integer t (t > 1) such that p_ii^(n) = 0 for all values of n other than t, 2t, 3t, . . . and t is the smallest integer with this property. In the gambling example (end of Sec. 29.2), starting in state 1, it is possible for the process to enter state 1 only at times 2, 4, . . . , so state 1 has period 2. The reason is that the player can break even (be neither winning nor losing) only at times 2, 4, . . . , which can be verified by calculating p_11^(n) for all n and noting that p_11^(n) = 0 for n odd. You also can see in Fig. 29.4 that the process always takes two steps to return to state 1 until the process gets absorbed in either state 0 or state 3. (The same conclusion also applies to state 2.) If there are two consecutive numbers s and s + 1 such that the process can be in state i at times s and s + 1, the state is said to have period 1 and is called an aperiodic state.

Just as recurrence is a class property, it can be shown that periodicity is a class property. That is, if state i in a class has period t, then all states in that class have period t. In the gambling example, state 2 also has period 2 because it is in the same class as state 1 and we noted above that state 1 has period 2.
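Numerically, the period of state i can be found as the greatest common divisor of the times n at which a return is possible, that is, of the n with p_ii^(n) > 0. The sketch below uses a finite cutoff N, which is adequate for small chains such as these; the gambling chain with the assumed p = 0.4 is reused.

```python
from math import gcd

def period(P, i, N=50):
    """Period of state i: gcd of all n <= N with p_ii^(n) > 0 (N is a cutoff)."""
    n_states = len(P)
    # Only the positions of the positive entries matter, so track Boolean "powers".
    step = [[P[a][b] > 0 for b in range(n_states)] for a in range(n_states)]
    power = [row[:] for row in step]          # positivity pattern of P^n
    g = 0
    for n in range(1, N + 1):
        if power[i][i]:
            g = gcd(g, n)
        power = [[any(power[a][k] and step[k][b] for k in range(n_states))
                  for b in range(n_states)] for a in range(n_states)]
    return g

p = 0.4
P = [[1, 0, 0, 0], [1 - p, 0, p, 0], [0, 1 - p, 0, p], [0, 0, 0, 1]]
print(period(P, 1), period(P, 2))  # 2 2 -> states 1 and 2 have period 2
print(period(P, 0))                # 1   -> the absorbing state 0 is aperiodic
```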

It is possible for a Markov chain to have both a recurrent class of states and a transient class of states where the two classes have different periods greater than 1.

In a finite-state Markov chain, recurrent states that are aperiodic are called ergodic states. A Markov chain is said to be ergodic if all its states are ergodic states. You will see next that a key long-run property of a Markov chain that is both irreducible and ergodic is that its n-step transition probabilities will converge to steady-state probabilities as n grows large.
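This convergence is easy to observe by raising P to a high power: for an irreducible ergodic chain, every row of P^n approaches the same vector of steady-state probabilities, so the starting state eventually does not matter. A sketch with an illustrative two-state chain (the matrix values below are assumed for illustration, not taken from the examples above):

```python
import numpy as np

# An illustrative irreducible, ergodic two-state chain (values assumed).
P = np.array([[0.8, 0.2],
              [0.6, 0.4]])

print(np.linalg.matrix_power(P, 20))
# Both rows agree to many decimal places with the steady-state probabilities
# (0.75, 0.25): after enough steps, the starting state no longer matters.
```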
