PwA Cheatsheet

Common Distributions

Discrete

Binomial(n, p): pmf $\binom{n}{k} p^k (1-p)^{n-k}$; cdf $F(k; n, p) = \Pr(X \le k) = \sum_{i=0}^{\lfloor k \rfloor} \binom{n}{i} p^i (1-p)^{n-i}$; mean $np$; variance $np(1-p)$

Neg. Binomial(r, p): pmf $\binom{i-1}{r-1} p^r (1-p)^{i-r}$; cdf -; mean $\frac{r}{p}$; variance $\frac{r(1-p)}{p^2}$

Bernoulli(p): pmf $q = 1-p$ for $k = 0$, $p$ for $k = 1$; cdf $0$ for $k < 0$, $1-p$ for $0 \le k < 1$, $1$ for $k \ge 1$; mean $p$; variance $p(1-p)$

Uniform(a, b): pmf $\frac{1}{n}$ with $n = b - a + 1$; cdf $\frac{\lfloor k \rfloor - a + 1}{n}$; mean $\frac{a+b}{2}$; variance $\frac{(b-a+1)^2 - 1}{12}$

Geometric(p): pmf $p(1-p)^{i-1}$; cdf $1 - (1-p)^{i}$; mean $\frac{1}{p}$; variance $\frac{1-p}{p^2}$

Hypergeometric(N, K, n) ("k successes when drawing n from N items, K of which are successes"): pmf $\frac{\binom{K}{k}\binom{N-K}{n-k}}{\binom{N}{n}}$; cdf -; mean $\frac{nK}{N}$; variance $n \frac{K}{N} \frac{N-K}{N} \frac{N-n}{N-1}$

Poisson($\lambda$): pmf $\frac{\lambda^k e^{-\lambda}}{k!}$; cdf $e^{-\lambda} \sum_{i=0}^{\lfloor k \rfloor} \frac{\lambda^i}{i!}$; mean $\lambda$; variance $\lambda$
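
The Binomial row can be cross-checked numerically; the sketch below (not part of the original sheet, with arbitrary example values n = 10, p = 0.3, k = 4) compares the table formulas against scipy.stats:

```python
# Minimal sketch: check the Binomial pmf/cdf/mean/variance formulas from the table
# against scipy.stats. n, p, k are arbitrary example values.
from math import comb

from scipy.stats import binom

n, p, k = 10, 0.3, 4

# pmf from the table: C(n, k) p^k (1-p)^(n-k)
pmf_formula = comb(n, k) * p**k * (1 - p) ** (n - k)
assert abs(pmf_formula - binom.pmf(k, n, p)) < 1e-12

# cdf from the table: sum of the pmf for i = 0 .. floor(k)
cdf_formula = sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))
assert abs(cdf_formula - binom.cdf(k, n, p)) < 1e-12

# mean np and variance np(1-p)
assert abs(binom.mean(n, p) - n * p) < 1e-12
assert abs(binom.var(n, p) - n * p * (1 - p)) < 1e-12
```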

Continuous

Uniform(a, b): pdf $\frac{1}{b-a}$ for $x \in [a, b]$, $0$ otherwise; cdf $0$ for $x < a$, $\frac{x-a}{b-a}$ for $x \in [a, b)$, $1$ for $x \ge b$; mean $\frac{a+b}{2}$; variance $\frac{(b-a)^2}{12}$

Normal($\mu$, $\sigma^2$): pdf $\frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}$; cdf $\frac{1}{2}\left[1 + \operatorname{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right]$; mean $\mu$; variance $\sigma^2$

Exponential($\lambda$): pdf $\lambda e^{-\lambda x}$; cdf $1 - e^{-\lambda x}$; mean $\frac{1}{\lambda}$; variance $\frac{1}{\lambda^2}$

Hazard/Failure Rate Functions

Survival: $\bar{F}(t) = 1 - F(t)$
Hazard rate: $\lambda(t) = \frac{f(t)}{\bar{F}(t)}$
Distribution: $F(t) = 1 - \exp\left\{-\int_0^t \lambda(s)\, ds\right\}$
(Book: p. 217)
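
The hazard-rate identities are easy to verify for the Exponential distribution, whose hazard rate is the constant $\lambda$. The sketch below is illustrative only (the value of lam and the grid of t values are arbitrary choices):

```python
# Sketch: for Exponential(lam), the hazard lambda(t) = f(t) / F_bar(t) is constant,
# and integrating the hazard recovers the cdf. lam and t are example values.
import numpy as np
from scipy.stats import expon

lam = 2.0
t = np.linspace(0.1, 5.0, 50)

surv = expon.sf(t, scale=1 / lam)            # survival F_bar(t) = 1 - F(t)
hazard = expon.pdf(t, scale=1 / lam) / surv  # lambda(t) = f(t) / F_bar(t)
assert np.allclose(hazard, lam)              # constant hazard for the exponential

# F(t) = 1 - exp(-integral_0^t lambda(s) ds); with constant hazard the integral is lam*t
cdf_from_hazard = 1 - np.exp(-lam * t)
assert np.allclose(cdf_from_hazard, expon.cdf(t, scale=1 / lam))
```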

Events

Sample space: $S = \{\text{all possible outcomes}\}$
Event: $E \subseteq S$
Union (either or both): $E \cup F$
Intersection (both): $E \cap F$ or $EF$
Complement: $E^C = S \setminus E$, so $P(E^C) = 1 - P(E)$
Inclusion-exclusion: $P(A \cup B) = P(A) + P(B) - P(A \cap B)$
De Morgan's laws:
1. $(E_1 \cup \dots \cup E_n)^C = E_1^C \cap \dots \cap E_n^C$
2. $(E_1 \cap \dots \cap E_n)^C = E_1^C \cup \dots \cup E_n^C$

Axioms

1. $0 \le P(E) \le 1$
2. $P(S) = 1$
3. For mutually exclusive events $A_i$, $i \ge 1$: $P\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i)$

Finite $S$, equal probability for all point sets: $P(A) = |A| \div |S|$

Odds of event $A$: $\alpha = \frac{P(A)}{P(A^C)} = \frac{P(A)}{1 - P(A)}$
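
For a finite, equally likely sample space, $P(A) = |A| / |S|$ lets the rules above be checked by brute-force enumeration. The following sketch (two fair dice; the events A and B are arbitrary examples, not from the sheet) verifies inclusion-exclusion and the complement rule:

```python
# Sketch: enumerate the sample space of two fair dice and check the event rules.
from fractions import Fraction
from itertools import product

S = set(product(range(1, 7), repeat=2))     # all 36 outcomes of two fair dice
A = {s for s in S if s[0] + s[1] == 7}      # event: the sum is 7
B = {s for s in S if s[0] == 6}             # event: the first die shows 6

def P(E):
    # equally likely outcomes: P(E) = |E| / |S|
    return Fraction(len(E), len(S))

assert P(A | B) == P(A) + P(B) - P(A & B)   # inclusion-exclusion
assert P(S - A) == 1 - P(A)                 # complement rule
```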

Conditional Probability and Independence I

Conditional probability: $P(F \mid E) = \frac{P(F \cap E)}{P(E)}$
Independence: if $P(F \cap E) = P(F)\,P(E)$
Multiplication rule: $P(E_1 E_2 \cdots E_n) = P(E_1)\,P(E_2 \mid E_1) \cdots P(E_n \mid E_1 \cdots E_{n-1})$
Bayes' formula (simple): $P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$
Bayes' formula (full): $P(A_i \mid B) = \frac{P(B \mid A_i)\,P(A_i)}{\sum_j P(B \mid A_j)\,P(A_j)}$
Conditional pmf (discrete): $p_{X \mid Y}(x \mid y) = \frac{p(x, y)}{p_Y(y)}$
Conditional cdf (discrete): $F_{X \mid Y}(x \mid y) = \sum_{a \le x} p_{X \mid Y}(a \mid y)$
Conditional density (continuous): $f_{X \mid Y}(x \mid y) = \frac{f(x, y)}{f_Y(y)}$
Conditional probabilities (continuous): $P\{X \in A \mid Y = y\} = \int_A f_{X \mid Y}(x \mid y)\, dx$
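
The full Bayes formula amounts to dividing one term of the law of total probability by the whole sum. A small illustrative sketch (the prior and likelihood numbers are invented for the example, not taken from the sheet):

```python
# Sketch of Bayes' formula (full form): P(A_i | B) = P(B|A_i)P(A_i) / sum_j P(B|A_j)P(A_j).
# "B" is the event "test positive"; the probabilities below are made-up example values.
priors = {"disease": 0.01, "healthy": 0.99}        # P(A_i)
likelihood = {"disease": 0.95, "healthy": 0.05}    # P(B | A_i)

def posterior(hypothesis):
    numerator = likelihood[hypothesis] * priors[hypothesis]
    evidence = sum(likelihood[a] * priors[a] for a in priors)  # sum_j P(B|A_j) P(A_j)
    return numerator / evidence

print(posterior("disease"))   # P(disease | positive) ~= 0.161
```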

Random Variables (Discrete)

Distribution function: $F(x) = P\{X \le x\}$
Probability mass function: $p(x) = P\{X = x\}$
Joint probability mass function: $P(X = x \text{ and } Y = y) = P(Y = y \mid X = x) \cdot P(X = x) = P(X = x \mid Y = y) \cdot P(Y = y)$
Expectation: $E[X] = \sum_{x:\, p(x) > 0} x\, p(x)$; note: $E[g(X)] = \sum_{x:\, p(x) > 0} g(x)\, p(x)$
Variance: $\mathrm{Var}(X) = E[(X - E[X])^2] = E[X^2] - (E[X])^2$
Standard deviation: $\sigma = \sqrt{\mathrm{Var}(X)}$
Covariance: $\mathrm{Cov}(X, Y) = E[(X - E[X])(Y - E[Y])] = E[XY] - E[X]\,E[Y]$
Moment generating function: $M(t) = E[e^{tX}]$ (same for continuous RVs)
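
These definitions translate directly into code when the pmf is given as a table. A minimal sketch (the pmf values are arbitrary examples):

```python
# Sketch: compute E[X], Var(X), sigma and M(t) straight from a discrete pmf,
# mirroring the definitions above. The pmf values are example numbers.
from math import exp

pmf = {0: 0.2, 1: 0.5, 2: 0.3}                      # p(x) = P{X = x}

E = sum(x * p for x, p in pmf.items())              # E[X] = sum x p(x)
E2 = sum(x**2 * p for x, p in pmf.items())          # E[X^2] = sum x^2 p(x)
var = E2 - E**2                                     # Var(X) = E[X^2] - (E[X])^2
std = var ** 0.5                                    # sigma = sqrt(Var(X))

def M(t):
    # moment generating function M(t) = E[e^{tX}]
    return sum(exp(t * x) * p for x, p in pmf.items())

print(E, var, std, M(0.1))
```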

Random Variables (Continuous) I

Probability density function: $f$ such that $P\{X \in B\} = \int_B f(x)\, dx$
Distribution function: $F$ such that $\frac{d}{dx} F(x) = f(x)$
Expectation: $E[X] = \int_{-\infty}^{\infty} x f(x)\, dx$; note: $E[g(X)] = \int_{-\infty}^{\infty} g(x) f(x)\, dx$
Variance: $\mathrm{Var}(X) = E[(X - E[X])^2] = E[X^2] - (E[X])^2$
Standard deviation: $\sigma = \sqrt{\mathrm{Var}(X)}$
Covariance: $\mathrm{Cov}(X, Y) = E[(X - E[X])(Y - E[Y])] = E[XY] - E[X]\,E[Y]$

Joint probability density function:
$P\{(X, Y) \in C\} = \iint_{(x, y) \in C} f(x, y)\, dx\, dy$
$P\{X \in A,\, Y \in B\} = \int_B \int_A f(x, y)\, dx\, dy$

Random Variables (Continuous) II

Marginal pdfs:
$f_X(x) = \int_{-\infty}^{\infty} f(x, y)\, dy$
$f_Y(y) = \int_{-\infty}^{\infty} f(x, y)\, dx$
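
Marginalizing a joint density is just integrating out the other variable. The sketch below uses an example density f(x, y) = x + y on the unit square (chosen for illustration, not taken from the sheet) together with scipy.integrate:

```python
# Sketch: f(x, y) = x + y on [0, 1]^2 integrates to 1, and the marginal
# f_X(x) = integral_0^1 f(x, y) dy equals x + 1/2.
from scipy.integrate import dblquad, quad

def f(x, y):
    return x + y

# dblquad integrates func(y, x) over y first, then x
total, _ = dblquad(lambda y, x: f(x, y), 0, 1, 0, 1)   # P{(X, Y) in [0,1]^2}

def f_X(x):
    # marginal density: integrate the joint density over y
    return quad(lambda y: f(x, y), 0, 1)[0]

print(total)      # ~1.0
print(f_X(0.25))  # ~0.75, matching f_X(x) = x + 1/2
```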

More on Expectation, Variance, ..

$E[X + Y] = E[X] + E[Y]$
$E[\alpha X] = \alpha E[X]$
$\mathrm{Var}(X + a) = \mathrm{Var}(X)$
$\mathrm{Var}(aX + b) = a^2\, \mathrm{Var}(X)$
$\mathrm{Var}(X + Y) = E[(X + Y)^2] - (E[X + Y])^2$
$\quad = E[X^2 + 2XY + Y^2] - (E[X] + E[Y])^2$
$\quad = E[X^2] + 2E[XY] + E[Y^2] - (E[X])^2 - 2E[X]E[Y] - (E[Y])^2$
$\quad = \mathrm{Var}(X) + \mathrm{Var}(Y) + 2\,(E[XY] - E[X]E[Y])$
$\quad = \mathrm{Var}(X) + \mathrm{Var}(Y) + 2\,\mathrm{Cov}(X, Y)$

Independence $\Rightarrow$
$E[f(X)g(Y)] = E[f(X)]\,E[g(Y)]$
$E[XY] = E[X]\,E[Y]$
$\mathrm{Cov}(X, Y) = 0$
$\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y)$

Correlation: $\mathrm{corr}(X, Y) = \rho(X, Y) = \frac{\mathrm{Cov}(X, Y)}{\sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)}}$
1. $-1 \le \rho(X, Y) \le 1$
2. Independence $\Rightarrow \rho(X, Y) = 0$
3. $Y = mX + c$ with $m \ne 0$: $m > 0 \Rightarrow \rho(X, Y) = 1$; $m < 0 \Rightarrow \rho(X, Y) = -1$

Conditional expectation (tower rule): $E[X] = E[E[X \mid Y]]$
Discrete: $E[X] = \sum_y E[X \mid Y = y]\, P\{Y = y\}$
Continuous: $E[X] = \int_{-\infty}^{\infty} E[X \mid Y = y]\, f_Y(y)\, dy$
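
The variance-of-a-sum identity holds for sample moments as well (as long as the same divisor is used throughout), so it can be checked by simulation. A sketch with invented, correlated data:

```python
# Sketch: Var(X+Y) = Var(X) + Var(Y) + 2 Cov(X, Y), checked on simulated correlated
# samples. The distribution of the data is an arbitrary example.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 0.5 * x + rng.normal(size=100_000)            # Y correlated with X

lhs = np.var(x + y)                               # ddof=0 throughout
rhs = np.var(x) + np.var(y) + 2 * np.cov(x, y, bias=True)[0, 1]
assert abs(lhs - rhs) < 1e-9                      # identity holds up to rounding error

rho = np.corrcoef(x, y)[0, 1]
assert -1.0 <= rho <= 1.0                         # correlation always lies in [-1, 1]
```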

Combinatorial Analysis

Order matters and $k = n$: Permutation
Order matters and $k < n$: Variation
Order does not matter and $k < n$: Combination

Counting

Basic counting principle: experiments $E_1, E_2, \dots, E_r$ with $n_1, n_2, \dots, n_r$ possible outcomes; total outcomes: $\prod_{i=1}^{r} n_i$

Permutations (without repeats): $n! = n \cdot (n-1) \cdots 1$

Permutations (with repeats): $\frac{n!}{k!} = n \cdot (n-1) \cdots (k+1)$

Variations (without repeats): $n \cdot (n-1) \cdots (n-k+1) = \frac{n!}{(n-k)!}$

Variations (with repeats): $\underbrace{n \cdot \ldots \cdot n}_{k \text{ times}} = n^k$

Combinations (without repeats), the "binomial coefficient": $\frac{n!}{(n-k)!\, k!} = \frac{n(n-1)(n-2) \cdots (n-k+1)}{k!} = \binom{n}{n-k} = \binom{n}{k}$

Multinomial coefficient: $\frac{n!}{n_1!\, n_2! \cdots n_r!} = \binom{n}{n_1, n_2, \dots, n_r}$ ("divide $n$ into $r$ non-overlapping subgroups of sizes $n_1, n_2, \dots$")

Combinations (with repeats): $\frac{(n+k-1)!}{(n-1)!\, k!} = \binom{n+k-1}{k} = \binom{n+k-1}{n-1}$
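
Python's math module covers most of these counting formulas directly; the sketch below (example values n = 5, k = 2, not from the sheet) maps each formula to a library call:

```python
# Sketch: the counting formulas above, evaluated with math.comb / math.perm / factorial.
from math import comb, factorial, perm

n, k = 5, 2

assert perm(n) == factorial(n)                           # permutations without repeats: n!
assert perm(n, k) == factorial(n) // factorial(n - k)    # variations without repeats
assert n**k == 25                                        # variations with repeats: n^k
assert comb(n, k) == factorial(n) // (factorial(n - k) * factorial(k))  # binomial coefficient
assert comb(n + k - 1, k) == 15                          # combinations with repeats
```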

Limit Theorems

Central Limit Theorem: $Z_n = \frac{(X_1 + X_2 + \dots + X_n) - n\mu}{\sigma \sqrt{n}}$. Then as $n \to \infty$,
$P(Z_n \le x) \to \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} \exp\left(-\frac{u^2}{2}\right) du$
i.e. $P(Z_n \le x) \to P(Y \le x)$ where $Y \sim N(0, 1)$.

Weak Law of Large Numbers: $E[X_i] = \mu$, $\mathrm{Var}(X_i) = \sigma^2$, $s_n = \frac{1}{n}(X_1 + \dots + X_n)$; then for any $\varepsilon > 0$, $\lim_{n \to \infty} P(|s_n - \mu| > \varepsilon) = 0$

Strong Law of Large Numbers: $P\{\lim_{n \to \infty} (X_1 + X_2 + \dots + X_n)/n = \mu\} = 1$

Markov's inequality: $P\{X \ge a\} \le \frac{E[X]}{a}$

Chebyshev's inequality: if $E[Y^2] < \infty$, then $\forall a > 0$: $P(|Y| \ge a) \le \frac{E[Y^2]}{a^2}$; in particular $P\{|X - \mu| \ge a\} \le \frac{\sigma^2}{a^2}$

One-sided Chebyshev (mean 0): $P\{X \ge a\} \le \frac{\sigma^2}{\sigma^2 + a^2}$

Chernoff bounds: $P\{X \ge a\} \le e^{-ta} M(t)$ for $t > 0$; $\quad P\{X \le a\} \le e^{-ta} M(t)$ for $t < 0$
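
The CLT statement can be illustrated by simulation; the sketch below (sample size, trial count, and the Uniform(0, 1) summands are arbitrary choices) compares the empirical $P(Z_n \le x)$ with the standard normal cdf:

```python
# Sketch: standardized sums of Uniform(0, 1) variables approach N(0, 1).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, trials = 200, 50_000
mu, sigma = 0.5, (1 / 12) ** 0.5               # mean and std of Uniform(0, 1)

sums = rng.random((trials, n)).sum(axis=1)
z = (sums - n * mu) / (sigma * np.sqrt(n))     # Z_n from the CLT statement

x = 1.0
empirical = np.mean(z <= x)                    # estimated P(Z_n <= x)
print(empirical, norm.cdf(x))                  # both ~0.841
```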

Markov Chains

Discrete

$P_{i,j} = P(\text{system is in state } j \text{ at time } n+1 \mid \text{system is in state } i \text{ at time } n)$

Transition matrix:
$P = \begin{pmatrix} p_{1,1} & p_{1,2} & \cdots & p_{1,n} \\ p_{2,1} & p_{2,2} & \cdots & p_{2,n} \\ \vdots & & & \vdots \\ p_{n,1} & p_{n,2} & \cdots & p_{n,n} \end{pmatrix}$

Probability vector $\pi^{(n)}$: probabilities that we are in state $i$ at time $n$.

$\pi^{(n+1)} = \pi^{(n)} P \qquad \pi^{(n)} = \pi^{(0)} P^n$

A Markov chain is ergodic (aperiodic and irreducible) iff there exists $n \in \mathbb{N}^+$ such that $P^n$ has no zero entries. It then has a steady-state probability vector $\pi = \lim_{n \to \infty} \pi^{(n)}$, independent of $\pi^{(0)}$.

1. $\pi_0 + \pi_1 + \dots + \pi_{N-1} = 1$
2. $\pi = \pi P$
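
Iterating $\pi^{(n+1)} = \pi^{(n)} P$ until it stops changing recovers the steady-state vector of an ergodic chain. A sketch with an invented two-state transition matrix:

```python
# Sketch: power iteration pi^(n+1) = pi^(n) P converges to the steady state pi = pi P.
# The transition matrix and initial distribution are example values.
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])              # no zero entries, so the chain is ergodic

pi = np.array([1.0, 0.0])               # arbitrary initial distribution pi^(0)
for _ in range(200):
    pi = pi @ P                         # pi^(n+1) = pi^(n) P

assert np.allclose(pi, pi @ P)          # stationarity: pi = pi P
assert np.isclose(pi.sum(), 1.0)        # probabilities sum to 1
print(pi)                               # ~[0.8, 0.2]
```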

Continuous

Poisson

$P(\tilde{N}(t) = k) = \frac{(\lambda t)^k}{k!} e^{-\lambda t}$ if:

1. For any fixed $t$, $\tilde{N}(t)$ is a discrete RV
2. $\tilde{N}(0) = 0$
3. # of events in disjoint intervals are independent
4. $\tilde{N}(t + h) - \tilde{N}(t)$ = # of events in $[t, t+h]$ for $h \to 0$
5. $P(\tilde{N}(h) = 1) = P(\text{event occurs in } [t, t+h]) = \lambda h + E(h)$, where $E(h)/h \to 0$ as $h \to 0$
6. $\frac{1}{h} P(\tilde{N}(h) \ge 2) \to 0$ as $h \to 0$

Birth-Death

Birth rates $\lambda_{i,i+1} = b_i$; death rates $\lambda_{i,i-1} = d_i$; $\lambda_{i,j} = 0$ otherwise. There is a steady-state probability vector if the $b_i$ and $d_i$ are non-zero and the number of states is finite.

1. $\pi_0 + \pi_1 + \dots + \pi_{N-1} = 1$
2. $\pi_j = \frac{b_0 b_1 \cdots b_{j-1}}{d_1 d_2 \cdots d_j}\, \pi_0$

M/M/S Queue

Customers arrive according to a Poisson process with rate $\lambda$; there are $S$ servers; service times are exponentially distributed with mean $\frac{1}{\mu}$. State $j$ = $j$ customers in the queue.

$b_j = \lambda, \qquad d_j = \begin{cases} j\mu, & j = 1, 2, \dots, S \\ S\mu, & j \ge S \end{cases}$

There is a steady-state probability vector if $\lambda < S\mu$.

M/M/1 Queue

$\pi_j = \left(1 - \frac{\lambda}{\mu}\right)\left(\frac{\lambda}{\mu}\right)^j$; mean queue length $E[J] = \frac{\lambda}{\mu - \lambda}$
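
A quick numeric check of the M/M/1 formulas (lam and mu are example values; the infinite sum over states is truncated):

```python
# Sketch: pi_j = (1 - rho) rho^j with rho = lam/mu, and mean queue length lam/(mu - lam).
lam, mu = 2.0, 5.0
rho = lam / mu                                       # stability requires lam < mu

pi = [(1 - rho) * rho**j for j in range(200)]        # truncated steady-state probabilities
mean_len = sum(j * p for j, p in enumerate(pi))

print(mean_len, lam / (mu - lam))                    # both ~0.6667
```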

Surprise, Uncertainty & Entropy

Entropy: $H(X) := -\sum_k p_X(x_k) \log_2 p_X(x_k)$ (with the convention $0 \log_2(0) := 0$)

Surprise: $S(X = x_k) = -\log_2 p_X(x_k)$

Properties:
1. $S(1) = 0 \ne S(0)$ (which is undefined)
2. $S$ decreases: $p < q \Rightarrow S(q) < S(p)$
3. $S(pq) = S(p) + S(q)$
If $S$ is continuous and these are satisfied, then $\exists C > 0$ such that $\forall p \in [0, 1]$: $S(p) = -C \log(p)$

Average uncertainty (joint entropy): $H(X, Y) := -\sum_j \sum_k p_{X,Y}(x_j, y_k) \log_2 p_{X,Y}(x_j, y_k)$

Uncertainty of $X$ given $Y = y_k$: $H_{Y = y_k}(X) := -\sum_j p_{X \mid Y = y_k}(x_j) \log_2 p_{X \mid Y = y_k}(x_j)$

Conditional entropy: $H_Y(X) := \sum_k H_{Y = y_k}(X)\, p_Y(y_k)$
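
Entropy and surprise follow directly from the definitions; a small sketch with an example pmf:

```python
# Sketch: entropy H(X) and surprise S(x) for an example pmf, using 0*log2(0) := 0.
from math import log2

pmf = [0.5, 0.25, 0.125, 0.125]

def surprise(p):
    # S = -log2 p; undefined at p = 0
    return -log2(p)

H = -sum(p * log2(p) for p in pmf if p > 0)   # skip zero-probability terms
print(H)                                      # 1.75 bits
print(surprise(0.125))                        # 3 bits
```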

Coding Theory

Code $C$: a map from $\{x_k\} \subseteq \mathbb{R}$ into sequences of 0's and 1's. The sequences are called code words.

Code word length: e.g. $x_k \mapsto 0111 \Rightarrow n_k = 4$

Expected length of code $C$: $E[C] = \sum_k n_k p_k = \sum_k n_k P(X = x_k)$

Acceptable code: no code word extends another one. For example, $x_1 \mapsto 0$, $x_2 \mapsto 00$ is not acceptable ($0$ is a prefix of $00$), whereas $x_1 \mapsto 0$, $x_2 \mapsto 10$ is acceptable.

Noiseless Coding Theorem

For any acceptable code assigning $n_k$ bits to $x_k$, the following holds:

$E[C] = \sum_k n_k p_k \ge H(X) = -\sum_k p_k \log_2 p_k$

where $p_k = P(X = x_k)$ and $n_k$ is the length of the code word associated with $x_k$.

Theorem: For any discrete RV $X$ there exists an acceptable code with expected length $E[C] = L$ such that

$H(X) \le L < H(X) + 1$

Algorithm for finding an acceptable code with expected length $H(X) \le E[C] = L < H(X) + 1$ for a discrete RV $X$:

1. Let $n_j$ be the integer satisfying $-\log_2 p_j \le n_j < -\log_2 p_j + 1$
2. Find any acceptable code assigning $n_j$ bits to $x_j$

There is no unique nearly-optimal code in general. Optimal or nearly-optimal coding depends on the pmf of $X$.
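
Step 1 of the algorithm is just rounding $-\log_2 p_j$ up to an integer (Shannon-style code lengths). The sketch below uses an example pmf, checks the Kraft inequality (which guarantees an acceptable code with those lengths exists), and verifies $H(X) \le E[C] < H(X) + 1$; constructing the actual code words is omitted:

```python
# Sketch: choose n_j = ceil(-log2 p_j) and check the bound H(X) <= E[C] < H(X) + 1.
# The pmf is an arbitrary example.
from math import ceil, log2

pmf = [0.4, 0.3, 0.2, 0.1]

lengths = [ceil(-log2(p)) for p in pmf]          # n_j satisfying -log2 p_j <= n_j < -log2 p_j + 1
H = -sum(p * log2(p) for p in pmf)               # entropy H(X)
L = sum(n * p for n, p in zip(lengths, pmf))     # expected code length E[C]

assert sum(2.0**-n for n in lengths) <= 1.0      # Kraft inequality: an acceptable code exists
assert H <= L < H + 1
print(H, L, lengths)
```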

Common Moment Generating Functions M(t)

Binomial: $(p e^t + 1 - p)^n$
Neg. Binomial: $\left[\frac{p e^t}{1 - (1-p)e^t}\right]^r$
Poisson: $\exp(\lambda(e^t - 1))$
Uniform: $\frac{e^{tb} - e^{ta}}{t(b - a)}$
Exponential: $\frac{\lambda}{\lambda - t}$
Normal: $\exp\left(\mu t + \frac{\sigma^2 t^2}{2}\right)$
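
As a sanity check (my own, with an example value of lam), derivatives of $M(t)$ at $t = 0$ recover the mean and variance; here this is done numerically for the Poisson MGF:

```python
# Sketch: M'(0) = E[X] and M''(0) = E[X^2] for the Poisson MGF exp(lam (e^t - 1)),
# estimated with finite differences. Both results should be ~lam.
from math import exp

lam, h = 3.0, 1e-5

def M(t):
    return exp(lam * (exp(t) - 1.0))

mean = (M(h) - M(-h)) / (2 * h)                 # central difference for M'(0) = lam
second = (M(h) - 2 * M(0.0) + M(-h)) / h**2     # second difference for M''(0) = E[X^2]
var = second - mean**2                          # Var(X) = E[X^2] - (E[X])^2 = lam

print(mean, var)                                # both ~3.0
```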