PROBABILITY THEORY: LAW OF LARGE NUMBERS

LAW OF LARGE NUMBERS

Section 24.7 pointed out that the mean of a random sample tends to converge to the expectation of the random variables as the sample size increases. In particular, suppose the random variable X, the demand for a product, may take on one of the possible values k = 0, 1, 2, . . . , 98, 99, each with P_X(k) = 1/100. Then E(X) is easily seen to be 49.5. If a random sample of size n is taken, i.e., the demands are observed for n days, with each day’s demand being independent and identically distributed random variables, it was noted that the sample mean X̄ should take on a value close to 49.5 if n is large. This result can be stated precisely as the law of large numbers.
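To make this concrete, the short simulation below (a minimal sketch assuming NumPy; the seed and sample sizes are arbitrary illustrative choices, not from the text) draws n i.i.d. daily demands uniform on {0, 1, . . . , 99} and prints how far the sample mean falls from μ = 49.5 as n grows.

```python
# Minimal sketch of the demand example: daily demand is uniform on
# {0, 1, ..., 99}, so E(X) = 49.5; the sample mean should settle near
# 49.5 as the sample size n increases.  (Seed and sample sizes are
# arbitrary illustrative choices.)
import numpy as np

rng = np.random.default_rng(seed=0)
mu = 49.5

for n in (10, 100, 1_000, 10_000, 100_000):
    demands = rng.integers(0, 100, size=n)  # n i.i.d. draws from {0, ..., 99}
    xbar = demands.mean()
    print(f"n = {n:>7,}:  sample mean = {xbar:7.3f},  |mean - mu| = {abs(xbar - mu):.3f}")
```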

Law of Large Numbers

Let X₁, X₂, . . . , Xₙ be independent, identically distributed random variables (a random sample of size n), each having mean μ. Consider the random variable that is the sample mean X̄:


$$\bar{X} = \frac{X_1 + X_2 + \cdots + X_n}{n}.$$

Then for any constant ε > 0,

$$\lim_{n \to \infty} P\{|\bar{X} - \mu| > \varepsilon\} = 0.$$

The interpretation of the law of large numbers is that as the sample size increases, the probability is “close” to 1 that X̄ is “close” to μ. Assuming that the variance of each Xᵢ is σ² < ∞, this result is easily proved by using Chebyshev’s inequality (stated in Sec. 24.8). Since each Xᵢ has mean μ and variance σ², X̄ also has mean μ, but its variance is σ²/n. Hence, applying Chebyshev’s inequality to the random variable X̄, it is evident that

$$P\left(\mu - \frac{C\sigma}{\sqrt{n}} \le \bar{X} \le \mu + \frac{C\sigma}{\sqrt{n}}\right) > 1 - \frac{1}{C^2}.$$

This is equivalent to

$$P\left\{|\bar{X} - \mu| > \frac{C\sigma}{\sqrt{n}}\right\} < \frac{1}{C^2}.$$

Let Cσ/√n = ε, so that C = ε√n/σ. Thus,

$$P\{|\bar{X} - \mu| > \varepsilon\} < \frac{\sigma^2}{\varepsilon^2 n},$$

so that

$$\lim_{n \to \infty} P\{|\bar{X} - \mu| > \varepsilon\} = 0,$$

as was to be proved.
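As a concrete instance of the final bound (with illustrative numbers not in the original text): for the demand example at the start of this section, the variance of a uniform demand on {0, 1, . . . , 99} is σ² = (100² − 1)/12 = 833.25, so taking ε = 1 and n = 10,000 observed days gives

$$P\{|\bar{X} - 49.5| > 1\} < \frac{833.25}{(1)^2(10{,}000)} \approx 0.083.$$

The sketch below (again assuming NumPy; ε, n, and the trial count are arbitrary choices) estimates this tail probability by Monte Carlo. The observed frequency should fall well below the Chebyshev bound, which is loose by design since it uses only the mean and variance of the distribution.

```python
# Empirical check of the bound P{|Xbar - mu| > eps} < sigma^2 / (eps^2 n)
# for the demand example.  (eps, n, and the trial count are illustrative
# choices, not from the text.)
import numpy as np

rng = np.random.default_rng(seed=1)
mu = 49.5
var = (100**2 - 1) / 12        # variance of uniform {0, ..., 99} = 833.25
eps, n, trials = 1.0, 10_000, 2_000

# Each trial: observe n days of demand and record the sample mean.
xbars = np.array([rng.integers(0, 100, size=n).mean() for _ in range(trials)])
freq = np.mean(np.abs(xbars - mu) > eps)

print(f"Chebyshev bound sigma^2/(eps^2 n): {var / (eps**2 * n):.4f}")
print(f"empirical P(|Xbar - mu| > eps):    {freq:.4f}")
```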
