Here I will summarise the results from probability required for this course. A problem sheet will accompany this summary. If you struggle with these results you should see me so that we can discuss how you might familiarise yourself with this material. If this course is optional for you and you are unfamiliar with these results, I strongly suggest you consider another course.
Let Ω be the space of all outcomes and A, B ⊆ Ω (e.g. for a fair die, Ω = {1, 2, 3, 4, 5, 6}, and A and B are subsets such as the odd numbers {1, 3, 5}). A and B are called ‘events’, like the event of an odd number being rolled. If all outcomes are equally likely (as for the roll of a fair die) then we define the probability of the set A to be
P(A) = |A| / |Ω|
i.e. the size of A divided by the size of the whole space. More generally, P is said to be a probability distribution (or measure) if the following three axioms hold:

1. P(A) ≥ 0 for every event A ⊆ Ω
2. P(Ω) = 1
3. for pairwise disjoint events A_1, A_2, . . .,

P(A_1 ∪ A_2 ∪ · · ·) = P(A_1) + P(A_2) + · · ·
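As a quick check of these definitions, here is a minimal Python sketch (our own illustration, not part of the original notes; the helper name prob is our choice) using the fair-die example:

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}           # all outcomes of a fair die

def prob(event):
    """P(A) = |A| / |Omega| when all outcomes are equally likely."""
    return Fraction(len(event), len(omega))

odds = {1, 3, 5}
print(prob(odds))                    # 1/2
print(prob(omega))                   # 1 (axiom 2: P(Omega) = 1)
# axiom 3 for the disjoint events {1, 3, 5} and {2}:
print(prob(odds | {2}) == prob(odds) + prob({2}))   # True
```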
0.1.1 Complement
The probability of the complement of a set, A^c (i.e. the stuff in Ω but not in A), is given by

P(A^c) = 1 − P(A)
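For the fair die with A = {1, 3, 5}, for example, P(A^c) = P({2, 4, 6}) = 1 − 1/2 = 1/2.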
0.1.2 Sum Rule
The sum rule is defined as
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

Typically we will consider the case where A and B are disjoint, in which case this reduces to

P(A ∪ B) = P(A) + P(B)
NOTE: The course book (PRML) treats the sum rule only for the case where A and B are disjoint.
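As a worked example on the fair die, take A = {1, 2, 3} and B = {3, 4}. Then P(A ∪ B) = 3/6 + 2/6 − 1/6 = 4/6 = 2/3; the outcome 3 lies in both events, and the −P(A ∩ B) term stops it being counted twice.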
The probability of A occurring given that we observed B ≠ ∅ is given by

P(A|B) = P(A ∩ B) / P(B)
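For example, the probability that the die shows a 1 given that the roll is odd is P({1}|{1, 3, 5}) = (1/6)/(1/2) = 1/3.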
0.2.1 Independence
The sets A and B are independent if
P(A ∩ B) = P(A)P(B)
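On the fair die, for instance, A = {2, 4, 6} and B = {1, 2, 3, 4} are independent: P(A ∩ B) = P({2, 4}) = 1/3 = (1/2)(2/3) = P(A)P(B). Knowing that B occurred tells us nothing about whether the roll was even.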
0.2.2 Product Rule
The product rule is defined as
P(A ∩ B) = P(A|B)P(B)
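For example, on the fair die, P(roll a 2 and roll even) = P(roll a 2 | even) P(even) = (1/3)(1/2) = 1/6, matching P({2}) computed directly.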
0.2.3 Bayes’ Rule
From the product rule, since P(A ∩ B) = P(A|B)P(B) = P(B|A)P(A), one deduces Bayes’ Rule

P(A|B) = P(B|A)P(A) / P(B)
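To make this concrete, here is a small Python sketch (our own illustration; the events chosen are arbitrary) that checks Bayes’ Rule numerically on the fair die:

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}
A, B = {2, 4, 6}, {1, 2}          # A: roll is even, B: roll is at most 2

def prob(event):
    return Fraction(len(event), len(omega))

def cond(X, Y):
    """P(X|Y) = P(X ∩ Y) / P(Y)."""
    return prob(X & Y) / prob(Y)

# Bayes' Rule: P(A|B) = P(B|A) P(A) / P(B)
print(cond(A, B))                        # 1/2, computed directly
print(cond(B, A) * prob(A) / prob(B))    # 1/2, via Bayes' Rule
```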
We now move on to discussing probability distributions, expectations and variances of random variables. We define X and Y to be random variables (r.v.s). A random variable is a function that assigns to every outcome a real value. Its distribution, also called the probability mass function (pmf), is defined as
p(x) = P({X = x})
The right hand side is the probability of the event that X takes the value x, i.e. the total probability of all outcomes where this is true. Once we have the distribution p(x) we can essentially forget about the underlying space Ω as far as X is concerned. If we generalise the setup appropriately to infinite Ω (the technical details are the subject of what is known as measure theory but won’t concern us here), we can have random variables that can take a continuum of values, e.g. any real number. For these one defines a probability density function (pdf) via
p(x) dx = P({x < X < x + dx})
which has to hold in the limit dx → 0. The upshot of this is that formulas for discrete r.v.s translate to formulas for continuous r.v.s just by replacing sums with integrals.
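This correspondence can be illustrated with a short Python sketch (our own, assuming only numpy): the pmf of the fair die sums to 1, while a pdf integrates to 1, approximated here by a Riemann sum on a fine grid.

```python
import numpy as np

# Discrete: the pmf of a fair die sums to 1.
pmf = {x: 1 / 6 for x in range(1, 7)}
print(sum(pmf.values()))                     # 1.0 (up to rounding)

# Continuous: a pdf integrates to 1; here the standard normal pdf.
x = np.linspace(-10.0, 10.0, 100_001)
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
print(pdf.sum() * (x[1] - x[0]))             # ~1.0, Riemann-sum approximation
```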
0.4.3 Conditional Expectation
The conditional expectation of X given Y = y is defined as
E[X|Y = y] = Σ_x x p(x|y)   if X is discrete

E[X|Y = y] = ∫ dx x p(x|y)   if X is continuous
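A minimal Python sketch shows the discrete case (the joint pmf below is an invented toy example, not from these notes):

```python
# Joint pmf p(x, y) of a toy discrete pair (X, Y).
joint = {
    (0, 0): 0.1, (1, 0): 0.3,
    (0, 1): 0.4, (1, 1): 0.2,
}

def cond_expectation(y):
    """E[X|Y=y] = sum_x x * p(x|y), with p(x|y) = p(x, y) / p(y)."""
    p_y = sum(p for (x, yy), p in joint.items() if yy == y)
    return sum(x * p / p_y for (x, yy), p in joint.items() if yy == y)

print(cond_expectation(0))     # 0.3 / 0.4 = 0.75
print(cond_expectation(1))     # 0.2 / 0.6 = 0.333...
```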
0.4.4 Covariance between X and Y
The covariance between X and Y is defined as
Cov(X, Y) = E[(X − E[X])(Y − E[Y])] = E[XY] − E[X]E[Y]
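A short Python sketch (our own; sample averages stand in for the expectations) confirms that the two expressions agree:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 0.5 * x + rng.normal(size=100_000)           # correlated with x; Cov(X, Y) = 0.5

lhs = np.mean((x - x.mean()) * (y - y.mean()))   # definition
rhs = np.mean(x * y) - x.mean() * y.mean()       # shortcut formula
print(lhs, rhs)                                  # equal, both close to 0.5
```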
The univariate (1D) Gaussian or Normal distribution is given by
N(x|μ, σ^2) = (1 / (2πσ^2)^(1/2)) exp(−(x − μ)^2 / (2σ^2))
It has expected value (or mean) μ and variance σ^2. We call the inverse variance 1/σ^2 the precision.
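Here is a minimal Python sketch of this density (our own, assuming numpy); a Riemann sum checks the normalisation:

```python
import numpy as np

def normal_pdf(x, mu, sigma2):
    """N(x | mu, sigma^2) as defined above."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

xs = np.linspace(-10.0, 12.0, 100_001)
vals = normal_pdf(xs, mu=1.0, sigma2=2.0)
print(vals.sum() * (xs[1] - xs[0]))          # ~1.0: the density normalises
print(normal_pdf(1.0, 1.0, 2.0))             # density at the mean
```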
The multivariate (ND) Gaussian or Normal distribution is given by
N(x|μ, Σ) = (1 / ((2π)^(N/2) |Σ|^(1/2))) exp(−(1/2)(x − μ)^T Σ^(−1) (x − μ))
where x ∈ R^N, μ ∈ R^N and Σ ∈ R^(N×N) is a positive definite matrix. It has expected value μ and covariance Σ.
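And a minimal Python sketch of the multivariate density (our own, assuming numpy; np.linalg.solve avoids forming Σ^(−1) explicitly):

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    """N(x | mu, Sigma) for x, mu in R^N and Sigma positive definite."""
    N = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)   # (x - mu)^T Sigma^{-1} (x - mu)
    norm = (2 * np.pi) ** (N / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm

mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
print(mvn_pdf(np.array([0.0, 1.0]), mu, Sigma))  # density at the mean
```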