Posts

Showing posts from 2015

MARKOV CHAINS: CONTINUOUS TIME MARKOV CHAINS

CONTINUOUS TIME MARKOV CHAINS

In all the previous sections, we assumed that the time parameter t was discrete (that is, t = 0, 1, 2, . . .). Such an assumption is suitable for many problems, but there are certain cases (such as for some queueing models considered in Chap. 17) where a continuous time parameter (call it t′) is required, because the evolution of the process is being observed continuously over time. The definition of a Markov chain given in Sec. 29.2 also extends to such continuous processes. This section focuses on describing these "continuous time Markov chains" and their properties.

Steady-State Probabilities

Just as the transition probabilities for a discrete time Markov chain satisfy the Chapman-Kolmogorov equations, the continuous time transition probability function also satisfies these equations. Therefore, for any states i and j and nonnegative numbers t and s (0 < s < t),

p_ij(t) = Σ_{k=0}^{M} p_ik(s) p_kj(t − s).

The steady-state equation for state j has an intuitive interpretation …
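
To make the steady-state idea concrete, here is a minimal numerical sketch in Python (not from the text): it finds the steady-state probabilities of a small continuous time Markov chain from its transition rate (generator) matrix Q by solving pi Q = 0 together with sum(pi) = 1, which is one standard way to state the steady-state equations. The two-state rates are illustrative assumptions, not data from the example.

# Minimal sketch (illustrative rates, not book data): steady-state
# probabilities of a continuous time Markov chain from its generator matrix Q.
import numpy as np

Q = np.array([[-3.0,  3.0],    # q_01 = 3: rate of leaving state 0 for state 1
              [ 2.0, -2.0]])   # q_10 = 2: rate of leaving state 1 for state 0

# Steady-state equations: pi Q = 0 together with sum(pi) = 1.
M = np.vstack([Q.T, np.ones(Q.shape[0])])
b = np.append(np.zeros(Q.shape[0]), 1.0)
pi, *_ = np.linalg.lstsq(M, b, rcond=None)
print(pi)                      # -> [0.4 0.6]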

MARKOV CHAINS: ABSORBING STATES

ABSORBING STATES

It was pointed out in Sec. 29.4 that a state k is called an absorbing state if p_kk = 1, so that once the chain visits k it remains there forever. If k is an absorbing state, and the process starts in state i, the probability of ever going to state k is called the probability of absorption into state k, given that the system started in state i. This probability is denoted by f_ik.

When there are two or more absorbing states in a Markov chain, and it is evident that the process will be absorbed into one of these states, it is desirable to find these probabilities of absorption. These probabilities can be obtained by solving a system of linear equations that considers all the possibilities for the first transition and then, given the first transition, considers the conditional probability of absorption into state k. In particular, if the state k is an absorbing state, then the set of absorption probabilities f_ik satisfies the system of equations

f_ik = Σ_{j=0}^{M} p_ij f_jk,   for i = 0, 1, . . . , M,

subject to the conditions f_kk = 1 and f_ik = 0 if state i is recurrent and i ≠ k. …
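
As a concrete illustration of the first-transition argument, the following Python sketch solves the linear system f_ik = Σ_j p_ij f_jk for a small chain with two absorbing states. The four-state random-walk matrix (states 0 and 3 absorbing, the inner states moving up or down with equal probability) is an illustrative assumption, not data from the text.

# Minimal sketch: absorption probabilities by first-step analysis.
import numpy as np

P = np.array([[1.0, 0.0, 0.0, 0.0],    # state 0 is absorbing
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])   # state 3 is absorbing
k = 3                                  # target absorbing state

# Solve f_i = sum_j p_ij f_j for the transient states i, with f_k = 1 and
# f = 0 at the other absorbing state.
transient = [1, 2]
A = np.eye(len(transient)) - P[np.ix_(transient, transient)]
b = P[np.ix_(transient, [k])].ravel()
f = np.linalg.solve(A, b)
print(dict(zip(transient, f)))         # -> {1: 0.333..., 2: 0.666...}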

MARKOV CHAINS: FIRST PASSAGE TIMES

FIRST PASSAGE TIMES

Section 29.3 dealt with finding n-step transition probabilities from state i to state j. It is often desirable to also make probability statements about the number of transitions made by the process in going from state i to state j for the first time. This length of time is called the first passage time in going from state i to state j. When j = i, this first passage time is just the number of transitions until the process returns to the initial state i. In this case, the first passage time is called the recurrence time for state i.

To illustrate these definitions, reconsider the inventory example introduced in Sec. 29.1, where X_t is the number of cameras on hand at the end of week t and we start with X_0 = 3. Suppose that it turns out that

X_0 = 3,  X_1 = 2,  X_2 = 1,  X_3 = 0,  X_4 = 3, . . . .

In this case, the first passage time in going from state 3 to state 1 is 2 weeks, the first passage time in going from state 3 to state 0 is 3 weeks, and the recurrence time for state 3 is 4 weeks. …
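
A minimal Python sketch of these definitions: given one observed realization of the process, a first passage time (or recurrence time) is found by scanning for the first visit to the target state. The path used below is the realization quoted above; the helper function is hypothetical, introduced only for illustration.

def first_passage_time(path, j):
    # Number of transitions until the process first reaches state j,
    # starting from path[0]; None if j is never reached in this realization.
    for n, state in enumerate(path[1:], start=1):
        if state == j:
            return n
    return None

path = [3, 2, 1, 0, 3]                 # X0, X1, X2, X3, X4
print(first_passage_time(path, 1))     # -> 2 weeks (state 3 to state 1)
print(first_passage_time(path, 0))     # -> 3 weeks (state 3 to state 0)
print(first_passage_time(path, 3))     # -> 4 weeks (recurrence time for state 3)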

MARKOV CHAINS: LONG-RUN PROPERTIES OF MARKOV CHAINS

LONG-RUN PROPERTIES OF MARKOV CHAINS

Steady-State Probabilities

While calculating the n-step transition probabilities for both the weather and inventory examples in Sec. 29.3, we noted an interesting feature of these matrices. If n is large enough (n = 5 for the weather example and n = 8 for the inventory example), all the rows of the matrix have identical entries, so the probability that the system is in each state j no longer depends on the initial state of the system. In other words, there is a limiting probability that the system will be in each state j after a large number of transitions, and this probability is independent of the initial state. These properties of the long-run behavior of finite-state Markov chains do, in fact, hold under relatively general conditions, as summarized below. Thus, the probability of finding the process in a transient state after a large number of transitions tends to zero.

Expected Average Cost per Unit Time

The preceding subsection …
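
To make the limiting-probability idea concrete, the Python sketch below solves the standard steady-state equations pi = pi P with sum(pi) = 1 and then uses pi to compute a long-run expected average cost sum_j pi_j C(j). The one-step matrix and the costs are illustrative assumptions (the matrix is merely chosen to be consistent with the two-day weather probabilities quoted in the Chapman-Kolmogorov post below).

# Minimal sketch: steady-state probabilities and expected average cost.
import numpy as np

P = np.array([[0.8, 0.2],      # assumed one-step weather matrix
              [0.6, 0.4]])
C = np.array([0.0, 10.0])      # assumed cost incurred in each state

A = np.vstack([P.T - np.eye(2), np.ones(2)])   # pi (P - I) = 0 and sum(pi) = 1
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)                      # -> [0.75 0.25]
print(pi @ C)                  # long-run expected average cost per unit time -> 2.5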

MARKOV CHAINS: CLASSIFICATION OF STATES OF A MARKOV CHAIN

CLASSIFICATION OF STATES OF A MARKOV CHAIN

We have just seen near the end of the preceding section that the n-step transition probabilities for the inventory example converge to steady-state probabilities after a sufficient number of steps. However, this is not true for all Markov chains. The long-run properties of a Markov chain depend greatly on the characteristics of its states and transition matrix. To further describe the properties of Markov chains, it is necessary to present some concepts and definitions concerning these states.

State j is said to be accessible from state i if p_ij^(n) > 0 for some n ≥ 0. (Recall that p_ij^(n) is just the conditional probability of being in state j after n steps, starting in state i.) Thus, state j being accessible from state i means that it is possible for the system to enter state j eventually when it starts from state i. This is clearly true for the weather example (see Fig. 29.1) since p_ij > 0 for all i and j. In the inventory example (see Fig. 29.2), p_ij^(2) > 0 for all i and j, so every state is accessible from every other state. …
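
Accessibility can be checked mechanically: state j is accessible from state i exactly when there is a path of positive-probability one-step transitions from i to j, so the question reduces to reachability in a directed graph with an arc i -> j whenever p_ij > 0. The Python sketch below does this for a small assumed matrix and also flags which pairs of states are accessible from each other (i.e., communicate).

# Minimal sketch: accessibility as graph reachability (illustrative matrix
# with an absorbing state 2).
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.0, 1.0]])

adj = P > 0
reach = adj | np.eye(len(P), dtype=bool)          # n = 0 steps: i reaches itself
for _ in range(len(P)):                           # repeated squaring gives the closure
    reach = reach | (reach.astype(int) @ reach.astype(int) > 0)

print(reach)                   # reach[i, j] is True iff j is accessible from i
print(reach & reach.T)         # True where i and j are accessible from each other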

MARKOV CHAINS: CHAPMAN-KOLMOGOROV EQUATIONS

CHAPMAN-KOLMOGOROV EQUATIONS

Section 29.2 introduced the n-step transition probability p_ij^(n). The following Chapman-Kolmogorov equations provide a method for computing these n-step transition probabilities:

p_ij^(n) = Σ_{k=0}^{M} p_ik^(m) p_kj^(n−m),   for all i, j, and m = 1, 2, . . . , n − 1.

(These equations also hold in a trivial sense when m = 0 or m = n, but m = 1, 2, . . . , n − 1 are the only interesting cases.) These equations point out that in going from state i to state j in n steps, the process will be in some state k after exactly m (less than n) steps.

Thus, if the weather is in state 0 (dry) on a particular day, the probability of being in state 0 two days later is 0.76 and the probability of being in state 1 (rain) then is 0.24. Similarly, if the weather is in state 1 now, the probability of being in state 0 two days later is 0.72, whereas the probability of being in state 1 then is 0.28. The probabilities of the state of the weather three, four, or five days into the future also can be read in the same way from the three-step, four-step, and five-step transition matrices, calculated to three significant digits. …
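
The two-day probabilities quoted above follow from one matrix multiplication, since in matrix form the Chapman-Kolmogorov equations say P^(n) = P^(m) P^(n−m), so the two-step matrix is simply P·P. The one-step weather matrix below is an assumption chosen to be consistent with the quoted values 0.76, 0.24, 0.72, 0.28 (the original one-step matrix is not reproduced in this excerpt).

# Minimal sketch: n-step transition matrices by matrix multiplication.
import numpy as np

P = np.array([[0.8, 0.2],      # state 0 = dry, state 1 = rain (assumed values)
              [0.6, 0.4]])

P2 = P @ P                     # two-step transition matrix
print(P2)                      # -> [[0.76 0.24]
                               #     [0.72 0.28]]
P5 = np.linalg.matrix_power(P, 5)
print(P5)                      # rows are nearly identical after five steps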

MARKOV CHAINS

MARKOV CHAINS

Assumptions regarding the joint distribution of X_0, X_1, . . . are necessary to obtain analytical results. One assumption that leads to analytical tractability is that the stochastic process is a Markov chain, which has the following key property: A stochastic process {X_t} is said to have the Markovian property if

P{X_{t+1} = j | X_0 = k_0, X_1 = k_1, . . . , X_{t−1} = k_{t−1}, X_t = i} = P{X_{t+1} = j | X_t = i},

for t = 0, 1, . . . and every sequence i, j, k_0, k_1, . . . , k_{t−1}. In words, this Markovian property says that the conditional probability of any future "event," given any past "events" and the present state X_t = i, is independent of the past events and depends only upon the present state. A stochastic process {X_t} (t = 0, 1, . . .) is a Markov chain if it has the Markovian property.

The conditional probabilities P{X_{t+1} = j | X_t = i} for a Markov chain are called (one-step) transition probabilities. If, for each i and j,

P{X_{t+1} = j | X_t = i} = P{X_1 = j | X_0 = i}   for all t = 1, 2, . . . ,

then the (one-step) transition probabilities are said to be stationary. …
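
One way to see the Markovian property in action: when stationary one-step transition probabilities are arranged in a matrix P, the chain can be simulated by drawing each next state from the row of P indexed by the current state alone, with no reference to earlier history. The matrix below is an illustrative assumption.

# Minimal sketch: simulating a Markov chain from its one-step matrix.
import numpy as np

P = np.array([[0.8, 0.2],
              [0.6, 0.4]])
rng = np.random.default_rng(0)

state = 0
path = [state]
for t in range(10):
    state = rng.choice(len(P), p=P[state])   # depends only on the current state
    path.append(state)
print(path)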

MARKOV CHAINS: STOCHASTIC PROCESSES

STOCHASTIC PROCESSES

A stochastic process is defined as an indexed collection of random variables {X_t}, where the index t runs through a given set T. Often T is taken to be the set of nonnegative integers, and X_t represents a measurable characteristic of interest at time t. For example, X_t might represent the inventory level of a particular product at the end of week t. Stochastic processes are of interest for describing the behavior of a system operating over some period of time.

A stochastic process often has the following structure. The current status of the system can fall into any one of M + 1 mutually exclusive categories called states. For notational convenience, these states are labeled 0, 1, . . . , M. The random variable X_t represents the state of the system at time t, so its only possible values are 0, 1, . . . , M. The system is observed at particular points of time, labeled t = 0, 1, 2, . . . . Thus, the stochastic process {X_t} = {X_0, X_1, X_2, . . .} provides a mathematical representation of how the status of the physical system evolves over time. …

SUMMARY OF EXAMPLES OF PERFORMING SIMULATIONS ON SPREADSHEETS WITH ANALYTIC SOLVER PLATFORM

SUMMARY

Increasingly, spreadsheet software is being used to perform simulations. As illustrated in Secs. 20.1 and 20.4, the standard Excel package sometimes is sufficient to do this. In addition, some Excel add-ins now are available that greatly extend these capabilities. ASPE is an especially powerful add-in of this kind.

When using ASPE, each input cell that has a random value is referred to as an uncertain variable cell. The procedure for defining an uncertain variable cell includes selecting one of 46 types of probability distributions from the Distributions menu to enter into the cell. When historical data are available, ASPE also has a procedure for identifying which continuous distribution fits the data best. An output cell that is used to forecast a measure of performance is called a results cell. Each trial of a simulation run generates a value in each results cell. When the simulation run is completed, ASPE provides the results in a variety of useful forms, including …

EXAMPLES OF PERFORMING SIMULATIONS ON SPREADSHEETS WITH ANALYTIC SOLVER PLATFORM: DECISION MAKING WITH PARAMETER ANALYSIS REPORTS AND TREND CHARTS

DECISION MAKING WITH PARAMETER ANALYSIS REPORTS AND TREND CHARTS

Many simulation models include at least one decision variable. For example, the model formulated for both the bidding example in Sec. 28.1 and the overbooking example in Sec. 28.5 included a single decision variable, as listed below:

Bidding example: OurBid (C25) in Fig. 28.2
Overbooking example: ReservationsToAccept (C13) in Fig. 28.16

In both of these cases, you have seen how well simulation with ASPE can evaluate a particular value of the decision variable by providing a wealth of output for the results cell(s). However, in contrast to many OR techniques, this approach has not identified an optimal solution for the decision variable(s). Fortunately, ASPE provides a way to systematically perform multiple simulations by using parameter cells. This makes it easy to identify at least an approximation of an optimal solution for problems with only one or two decision variables. In this section, we describe this approach. …
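
The same idea can be sketched outside ASPE in a few lines of plain Python: run a batch of simulation trials for each candidate value of a single decision variable and compare a summary statistic of the results, much as a parameter analysis report tabulates one simulation per parameter value. The toy overbooking-style model and all of its numbers below are assumptions for illustration, not the figures from Sec. 28.5.

# Generic sketch (not ASPE): one simulation per candidate decision value.
import numpy as np

rng = np.random.default_rng(1)
SEATS, FARE, BUMP_COST, SHOW_PROB, TRIALS = 100, 300, 450, 0.9, 5000

def mean_profit(reservations_to_accept):
    # One simulation run: TRIALS trials of a toy overbooking model.
    shows = rng.binomial(reservations_to_accept, SHOW_PROB, size=TRIALS)
    revenue = FARE * np.minimum(shows, SEATS)
    penalty = BUMP_COST * np.maximum(shows - SEATS, 0)
    return (revenue - penalty).mean()

for r in range(100, 121, 5):           # candidate values of the decision variable
    print(r, round(mean_profit(r), 1)) # pick the value with the largest mean profit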

EXAMPLES OF PERFORMING SIMULATIONS ON SPREADSHEETS WITH ANALYTIC SOLVER PLATFORM: CHOOSING THE RIGHT DISTRIBUTION

CHOOSING THE RIGHT DISTRIBUTION

As mentioned in Sec. 20.6, ASPE's Distributions menu provides a wealth of choices. Any of 46 probability distributions can be selected as the one to be entered into any uncertain variable cell. In the preceding sections, we have illustrated the use of five of these distributions (the integer uniform, uniform, triangular, normal, and binomial distributions). However, not much was said about why any particular distribution was chosen. In this section, we focus on the issue of how to choose the right distribution.

We begin by surveying the characteristics of many of the 46 distributions and how these characteristics help to identify the best choice. We next describe a special feature of ASPE for creating one of the 7 available custom distributions when none of the other 39 choices in the Distributions menu will do. We then return to the example analyzed in Sec. 20.6 to illustrate another special feature of ASPE. When historical data are available, this feature identifies which continuous distribution fits the data best. …
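
The fit-to-data idea can also be illustrated outside ASPE. The sketch below uses SciPy (an assumption for illustration, not what ASPE does internally) to fit a few candidate continuous distributions to a sample of observations and compare their Kolmogorov-Smirnov statistics, where a smaller statistic indicates a better fit. The sample data are simulated stand-ins for real historical data.

# Generic sketch (SciPy, not ASPE): compare candidate distributions on data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.gamma(shape=2.0, scale=3.0, size=200)   # stand-in historical data

for dist in (stats.gamma, stats.lognorm, stats.norm):
    params = dist.fit(data)                        # maximum-likelihood fit
    ks = stats.kstest(data, dist.name, args=params).statistic
    print(f"{dist.name:8s}  KS statistic = {ks:.3f}")   # smaller is better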