The Law of Total Probability and Bayes' Rule | STAT 3401

Class: Introduction to Probability Theory I (STAT 3401); Subject: Statistics; University: California State University-East Bay; Instructor: Jaimie Kwon; Term: Winter 2005; lecture notes dated 1/24/2005.


2.10 The law of total probability and Bayes’ rule

♦ Definition 2.11. For some positive integer k, let the sets B1, B2, …, Bk be such that

  1. S = B1 ∪ B2 ∪ … ∪ Bk,
  2. Bi ∩ Bj = ∅ if i ≠ j.

Then the collection of sets {B1, B2, …, Bk} is said to be a “partition” of S.

♦ If A is any subset of S, and {B1, B2, …, Bk} is a partition of S, then A can be “decomposed” as

A = (A∩B1) ∪ (A∩B2) ∪ … ∪ (A∩Bk).

  • See the Venn diagram.

♦ Theorem 2.8. (The law of total probability) Assume that {B1, B2, …, Bk} is a partition of S such that P(Bi) > 0 for i = 1, …, k. Then for any event A,

P(A) = Σi=1,…,k P(A|Bi) P(Bi).

→ Proof: apply the additive law.
→ Sometimes it's easier to calculate P(A|Bi) for a suitably chosen partition than to compute P(A) directly.
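A minimal numerical sketch of Theorem 2.8 in R (the language these notes use later for sampling); the three-set partition and all probabilities below are made-up illustration values, not from the notes:

```r
# Hypothetical partition {B1, B2, B3} of S
p_B         <- c(0.5, 0.3, 0.2)      # P(Bi); must sum to 1
p_A_given_B <- c(0.10, 0.40, 0.25)   # P(A | Bi), made-up values

# Law of total probability: P(A) = sum_i P(A|Bi) P(Bi)
p_A <- sum(p_A_given_B * p_B)
p_A  # 0.5*0.10 + 0.3*0.40 + 0.2*0.25 = 0.22
```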

♦ Theorem 2.9. (Bayes’ Rule) Assume {B1, B2, …, Bk} is a partition of S such that P(Bi) > 0 for i = 1, 2, …, k. Then

P(Bj|A) = P(A|Bj) P(Bj) / Σi=1,…,k P(A|Bi) P(Bi).

♦ Example 2.23. (An electronic fuse) Five production lines produce fuses at the same production rate; every line has a 2% defect rate except line 1, which has a 5% defect rate. A customer tested three fuses and one of them failed. What is:

P(the lot was produced in line 1 | the data) = ?
P(the lot was produced in one of lines 2-4 | the data) = ?
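A sketch of Example 2.23 in R, under the assumption that “the data” means exactly one defective fuse among the three tested (modeled as Binomial(3, p) for each line) and that equal production rates imply equal priors of 1/5:

```r
prior  <- rep(1/5, 5)                       # lines produce at the same rate
defect <- c(0.05, 0.02, 0.02, 0.02, 0.02)   # line 1: 5%; lines 2-5: 2%

# likelihood of exactly 1 defective among 3 fuses, for each line
lik <- dbinom(1, size = 3, prob = defect)

# Bayes' rule: posterior is likelihood times prior, normalized
posterior <- lik * prior / sum(lik * prior)

posterior[1]         # P(line 1 | data)           ~ 0.37
sum(posterior[2:4])  # P(one of lines 2-4 | data) ~ 0.47
```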

♦ HW. Some of the exercises 2.98~116.
♦ Keywords: the law of total probability; Bayes’ rule

2.11 Numerical events and random variables

♦ Events of major interest are “numerical events”.
♦ Define a variable Y that is a function of the sample points in S.
♦ {All sample points where Y = a} is the numerical event assigned to the number a.
♦ The sample space S can be partitioned into mutually exclusive sets of points assigned to the same value of Y.
♦ Definition 2.12. A “random variable” is a real-valued function for which the domain is a sample space.
♦ Convention: we let y denote an observed value of Y. Then P(Y=y) = ∑{P(Ei) : i such that Ei is assigned to y}. A formal definition comes later…
♦ Example 2.24. Tossing two coins, with Y = # of heads. What are the sample points in S? What is Y(Ei)? Which sample points correspond to {Y=y}? What is P(Y=y) for each value of y? (Worked out below.)
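Example 2.24 worked out in R; the enumeration is a sketch of the standard answer (four equally likely sample points):

```r
# Sample space for tossing two fair coins
S <- expand.grid(coin1 = c("H", "T"), coin2 = c("H", "T"),
                 stringsAsFactors = FALSE)
S$Y <- rowSums(S == "H")   # Y(Ei) = number of heads at each sample point

# group the sample points by Y value and add up P(Ei) = 1/4 each
p_y <- tapply(rep(1/4, nrow(S)), S$Y, sum)
p_y  # P(Y=0) = 0.25, P(Y=1) = 0.50, P(Y=2) = 0.25
```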

2.12 Random sampling

♦ Population vs. sample (= observations of the values of random variables).
♦ Sampling with/without replacement affects the probabilities of outcomes.
♦ The design of an experiment is the method of sampling.
♦ Definition 2.13. In sampling n elements from a population with N elements, if the sampling is conducted in such a way that each of the NCn possible samples has an equal probability of being selected, the sampling is said to be “random” and the result is said to be a “random sample”.
♦ How to do random sampling?
→ Low-tech method (e.g., drawing tickets from a jar after shaking it)
→ The random number table (Table 12)
→ Use a computer (in R, run sample(1:1000, 100); see the sketch below)
♦ Sometimes we don’t want a completely random sample.
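Expanding the notes’ own sample() suggestion, a quick sketch of sampling with and without replacement:

```r
set.seed(1)                          # reproducibility only
sample(1:1000, 100)                  # without replacement: 100 distinct values
sample(1:1000, 100, replace = TRUE)  # with replacement: repeats possible

choose(1000, 100)  # NCn: the number of equally likely samples in Definition 2.13
```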

3.3 The expected value of a random variable or a function of a random variable

♦ Definition 3.4. For a discrete random variable Y with probability function p(y), the “expected value” of Y, E(Y), is defined to be E(Y) = ∑y y p(y).
♦ If p(y) is an accurate characterization of the population frequency distribution, then E(Y) = μ, the population mean.
♦ This definition is consistent with the definition of the mean of a set of measurements (Definition 1.1).
♦ What about the mean of Y²? The mean of (Y−μ)²?
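A one-line check of Definition 3.4 in R, using the two-coin Y of Example 2.24 for concreteness:

```r
y   <- c(0, 1, 2)            # values of Y = number of heads
p_y <- c(0.25, 0.50, 0.25)   # p(y) from Example 2.24

EY <- sum(y * p_y)   # E(Y) = sum over y of y * p(y)
EY                   # 1: one head on average in two tosses
```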

♦ Theorem 3.2. Let Y be a discrete random variable with probability function p(y) and g(Y) be a real-valued function of Y. Then the expected value of g(Y) is given by

E[g(Y)] = ∑y g(y) p(y)

→ Note this is not a definition.
→ Proof: the trick is to define G = g(Y), which takes on values g1, …, gm, and express P(G=gi) = p*(gi) in terms of p(yj).

♦ Definition 3.5. The variance of a random variable Y is defined to be V(Y) = E[(Y−μ)²]. The “standard deviation” of Y is the positive square root of V(Y).

♦ If p(y) is an accurate characterization of the population frequency distribution, then V(Y) = σ² is the population variance and σ is the population SD.

♦ Example 3.2. Find the mean, variance, and standard deviation of Y in the above example. (See the sketch below.)

♦ In the following theorems, we assume Y is a discrete random variable with probability function p(y).
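A sketch of the computation Example 3.2 asks for; the notes don’t reproduce that example’s p(y), so the two-coin distribution from Example 2.24 stands in for illustration:

```r
y   <- c(0, 1, 2)
p_y <- c(0.25, 0.50, 0.25)

mu  <- sum(y * p_y)            # E(Y) = 1
VY  <- sum((y - mu)^2 * p_y)   # V(Y) = E[(Y - mu)^2] = 0.5 (Definition 3.5)
sdY <- sqrt(VY)                # standard deviation, about 0.707
c(mean = mu, var = VY, sd = sdY)
```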

♦ Theorem 3.3. Let Y be a discrete random variable with probability function p(y). For any constant c, E(c) = c.

♦ Theorem 3.4. Let Y be a discrete random variable with probability function p(y). For a function g(Y) of Y and a constant c, E[cg(Y)] = cE[g(Y)].

♦ Theorem 3.5. Let Y be a discrete random variable with probability function p(y). For functions g1(Y), g2(Y), …, gk(Y) of Y,

E[g1(Y) + g2(Y) + … + gk(Y)] = E[g1(Y)] + E[g2(Y)] + … + E[gk(Y)].

♦ Theorem 3.6. Let Y be a discrete random variable with probability function p(y). Then

V(Y) = σ² = E[(Y−μ)²] = E(Y²) − μ².

→ This makes the variance computation of Example 3.2 easier; a numerical check follows.
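The numerical check of Theorem 3.6, using the same stand-in distribution as above; both routes give the same variance:

```r
y   <- c(0, 1, 2)
p_y <- c(0.25, 0.50, 0.25)

mu  <- sum(y * p_y)     # E(Y) = 1
EY2 <- sum(y^2 * p_y)   # E(Y^2) = 0*0.25 + 1*0.50 + 4*0.25 = 1.5

EY2 - mu^2  # 0.5, matching E[(Y - mu)^2] computed directly
```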

♦ Example 3.4. The expected daily cost of two machines, A and B.

♦ HW. Some of the exercises 3.10~
♦ Keywords: E(Y); V(Y); E(g(Y))