MARKOV CHAINS

Assumptions regarding the joint distribution of X0, X1, . . . are necessary to obtain analytical results. One assumption that leads to analytical tractability is that the stochastic process is a Markov chain, which has the following key property:

A stochastic process {Xt} is said to have the Markovian property if P{Xt+1 = j⏐X0 = k0, X1 = k1, . . . , Xt-1 = kt-1, Xt = i} = P{Xt+1 = j⏐Xt = i}, for t = 0, 1, . . . and every sequence i, j, k0, k1, . . . , kt-1.

In words, this Markovian property says that the conditional probability of any future “event,” given any past “events” and the present state Xt = i, is independent of the past events and depends only upon the present state.

A stochastic process {Xt} (t = 0, 1, . . .) is a Markov chain if it has the Markovian property.

The conditional probabilities P{Xt+1 = j⏐Xt = i} for a Markov chain are called (one-step) transition probabilities. If, for each i and j,

P{Xt+1 = j⏐Xt = i} = P{X1 = j⏐X0 = i}, for all t = 1, 2, . . . ,

then the (one-step) transition probabilities are said to be stationary. Thus, having stationary transition probabilities implies that the transition probabilities do not change over time.

[The notation for the (one-step) transition probabilities and the transition matrix, together with the formulation of the weather example as a Markov chain, appears at this point in the original text.]

Furthermore, because these probabilities do not change if information about the weather before today (day t) is also taken into account,

P{Xt+1 = 0⏐X0 = k0, X1 = k1, . . . , Xt-1 = kt-1, Xt = 0} = P{Xt+1 = 0⏐Xt = 0}

P{Xt+1 = 0⏐X0 = k0, X1 = k1, . . . , Xt-1 = kt-1, Xt = 1} = P{Xt+1 = 0⏐Xt = 1}

for t = 0, 1, . . . and every sequence k0, k1, . . . , kt-1. These equations also must hold if Xt+1 = 0 is replaced by Xt+1 = 1. (The reason is that states 0 and 1 are mutually exclusive and the only possible states, so the probabilities of the two states must sum to 1.) Therefore, the stochastic process has the Markovian property, so the process is a Markov chain.

Using the notation introduced in this section, the (one-step) transition probabilities are

p00 = P{Xt+1 = 0⏐Xt = 0} = 0.8,    p01 = 1 - 0.8 = 0.2,
p10 = P{Xt+1 = 0⏐Xt = 1} = 0.6,    p11 = 1 - 0.6 = 0.4,

so the (one-step) transition matrix is

             State 0    State 1
State 0        0.8        0.2
State 1        0.6        0.4
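These numbers can be sanity-checked with a short simulation: over a long run of the chain, the fraction of transitions out of state 0 that land back in state 0 should approach 0.8. The sketch below is purely illustrative and assumes nothing beyond the two states and the matrix just given.

import random

# One-step transition matrix for the weather example (states 0 and 1).
P = [[0.8, 0.2],
     [0.6, 0.4]]

def simulate(P, start, n_steps, seed=0):
    # Simulate the chain: at each step, move to state 0 with probability
    # P[state][0], otherwise move to state 1.
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(n_steps):
        state = 0 if rng.random() < P[state][0] else 1
        path.append(state)
    return path

path = simulate(P, start=0, n_steps=100000)
# Estimate P{X(t+1) = 0 | X(t) = 0} from the simulated path; it should be near 0.8.
from_state_0 = [nxt for cur, nxt in zip(path, path[1:]) if cur == 0]
print(sum(1 for s in from_state_0 if s == 0) / len(from_state_0))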

Formulating the Inventory Example as a Markov Chain

Returning to the inventory example developed in the preceding section, recall that Xt is the number of cameras in stock at the end of week t (before ordering any more), so Xt represents the state of the system at time t (the end of week t). Given that the current state is Xt = i, the expression at the end of Sec. 29.1 indicates that Xt+1 depends only on Dt+1 (the demand in week t + 1) and Xt. Since Xt+1 is independent of any past history of the inventory system prior to time t, the stochastic process {Xt} (t = 0, 1, . . .) has the Markovian property and so is a Markov chain.

Now consider how to obtain the (one-step) transition probabilities, i.e., the elements of the (one-step) transition matrix

             State 0    State 1    State 2    State 3
State 0        p00        p01        p02        p03
State 1        p10        p11        p12        p13
State 2        p20        p21        p22        p23
State 3        p30        p31        p32        p33

[The numerical evaluation of these transition probabilities from the weekly demand distribution of Sec. 29.1 appears at this point in the original text.]

The information given by this transition matrix can also be depicted graphically with the state transition diagram in Fig. 29.2. The four possible states for the number of cameras on hand at the end of a week are represented by the four nodes (circles) in the diagram. The arrows show the possible transitions from one state to another, or sometimes from a state back to itself, when the camera store goes from the end of one week to the end of the next week. The number next to each arrow gives the probability of that particular transition occurring next when the camera store is in the state at the base of the arrow.
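The numerical transition probabilities can also be generated by a short computation once the weekly demand distribution and the ordering rule are specified. The sketch below is illustrative only: it assumes, purely for illustration, Poisson weekly demand with mean 1 and the rule "order enough to bring the stock up to 3 cameras whenever a week ends with no stock, otherwise order nothing"; the actual details belong to Sec. 29.1, which is not reproduced in this excerpt.

import math

STATES = [0, 1, 2, 3]          # cameras in stock at the end of a week

def demand_pmf(d, mean=1.0):
    # Assumed weekly demand distribution: Poisson with mean 1.
    return math.exp(-mean) * mean ** d / math.factorial(d)

def next_state(i, d):
    # Assumed ordering rule: when the previous week ended with 0 cameras,
    # the stock is brought up to 3 before the new week's demand d arrives;
    # otherwise the i cameras carried over are all that is available.
    available = 3 if i == 0 else i
    return max(available - d, 0)

# p_ij = P{X(t+1) = j | X(t) = i}: sum the demand probabilities that lead from i to j.
P = [[0.0] * len(STATES) for _ in STATES]
for i in STATES:
    for d in range(50):                     # truncate the demand distribution's tail
        P[i][next_state(i, d)] += demand_pmf(d)

for i, row in enumerate(P):
    print(i, [round(p, 3) for p in row])    # each row should sum to (essentially) 1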

Additional Examples of Markov Chains

A Stock Example. Consider the following model for the value of a stock. At the end of a given day, the price is recorded. If the stock has gone up, the probability that it will go up tomorrow is 0.7. If the stock has gone down, the probability that it will go up tomorrow is only 0.5. (For simplicity, we will count the stock staying the same as a decrease.) This is a Markov chain, where the possible states for each day are as follows:

State 0: The stock increased on this day.
State 1: The stock decreased on this day.

The transition matrix that shows each probability of going from a particular state today to a particular state tomorrow is given by

             State 0    State 1
State 0        0.7        0.3
State 1        0.5        0.5

The form of the state transition diagram for this example is exactly the same as for the weather example shown in Fig. 29.1, so we will not repeat it here. The only difference is that the transition probabilities in the diagram are slightly different (0.7 replaces 0.8, 0.3 replaces 0.2, and 0.5 replaces both 0.6 and 0.4 in Fig. 29.1).
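Translating such a verbal description into a transition matrix is mechanical: each row lists the probability of moving to state 0 (the stock increases) and to state 1 (it decreases) given today's state. A minimal sketch using the probabilities stated above:

# Probability the stock goes up tomorrow, given today's state
# (state 0 = increased today, state 1 = decreased today).
p_up = {0: 0.7, 1: 0.5}

# Row i of the transition matrix is [P(up tomorrow), P(down tomorrow)].
P = [[p_up[i], 1.0 - p_up[i]] for i in (0, 1)]
print(P)    # [[0.7, 0.3], [0.5, 0.5]]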

A Second Stock Example. Suppose now that the stock market model is changed so that the stock’s going up tomorrow depends upon whether it increased today and yesterday. In particular, if the stock has increased for the past two days, it will increase tomorrow with probability 0.9. If the stock increased today but decreased yesterday, then it will increase tomorrow with probability 0.6. If the stock decreased today but increased yesterday, then it will increase tomorrow with probability 0.5. Finally, if the stock decreased for the past two days, then it will increase tomorrow with probability 0.3. If we define the state as representing whether the stock goes up or down today, the system is no longer a Markov chain. However, we can transform the system to a Markov chain by defining the states as follows:2

State 0: The stock increased both today and yesterday.
State 1: The stock increased today and decreased yesterday.
State 2: The stock decreased today and increased yesterday.
State 3: The stock decreased both today and yesterday.

This leads to a four-state Markov chain with the following transition matrix:

             State 0    State 1    State 2    State 3
State 0        0.9        0          0.1        0
State 1        0.6        0          0.4        0
State 2        0          0.5        0          0.5
State 3        0          0.3        0          0.7

Figure 29.3 shows the state transition diagram for this example. An interesting feature of the example, revealed by both this diagram and the zero entries in the transition matrix, is that many of the transitions from state i to state j are impossible in one step. In other words, pij = 0 for 8 of the 16 entries in the transition matrix. However, note that it is always possible to go from any state i to any state j (including j = i) in two steps. The same holds true for three steps, four steps, and so forth. Thus, pij(n) > 0 for n = 2, 3, . . . , for all i and j.
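This claim is easy to verify numerically: squaring the transition matrix given above should produce a matrix in which all 16 two-step probabilities pij(2) are strictly positive. A minimal check (using NumPy purely for the matrix multiplication):

import numpy as np

# Transition matrix for the second stock example (states 0 through 3 as defined above).
P = np.array([[0.9, 0.0, 0.1, 0.0],
              [0.6, 0.0, 0.4, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.3, 0.0, 0.7]])

P2 = P @ P                   # two-step transition probabilities p_ij^(2)
print(P2)
print(bool((P2 > 0).all()))  # True: every state can reach every state in two steps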

A Gambling Example. Another example involves gambling. Suppose that a player has $1 and with each play of the game wins $1 with probability p > 0 or loses $1 with probability 1 - p > 0. The game ends when the player either accumulates $3 or goes broke. This game is a Markov chain with the states representing the player’s current holding of money, that is, 0, $1, $2, or $3, and with the transition matrix given by

             State 0    State 1    State 2    State 3
State 0        1          0          0          0
State 1       1 - p       0          p          0
State 2        0         1 - p       0          p
State 3        0          0          0          1

2We again are counting the stock staying the same as a decrease. This example demonstrates that Markov chains are able to incorporate arbitrary amounts of history, but at the cost of significantly increasing the number of states.

[Fig. 29.4, the state transition diagram for the gambling example, appears at this point in the original text.]

The state transition diagram for this example is shown in Fig. 29.4. This diagram demonstrates that once the process enters either state 0 or state 3, it will stay in that state forever after, since p00 = 1 and p33 = 1. States 0 and 3 are examples of what is called an absorbing state (a state that is never left once the process enters it). We will focus on analyzing absorbing states in Sec. 29.7.
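Because states 0 and 3 are absorbing, every play of the game eventually ends in one of them. The short simulation below illustrates this and estimates how often the player ends with $3 rather than going broke; the value p = 0.5 is an assumed win probability chosen purely for illustration, since the example leaves p unspecified.

import random

def play_one_game(p, rng):
    # Start with $1; win $1 with probability p, lose $1 otherwise; stop at $0 or $3.
    money = 1
    while money not in (0, 3):
        money += 1 if rng.random() < p else -1
    return money

rng = random.Random(1)
p = 0.5                                    # assumed win probability for illustration
results = [play_one_game(p, rng) for _ in range(100000)]
# Fraction of games absorbed at $3 (about 1/3 when p = 0.5).
print(sum(1 for m in results if m == 3) / len(results))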

Note that in both the inventory and gambling examples, the numeric labeling of the states coincides with a physical quantity of the system (the actual inventory level and the player's current holding of money, respectively), whereas the numeric labeling of the states in the weather and stock examples has no physical significance.
