



These study notes concern the problem of agnostic learning, where we do not assume that the optimal function belongs to a given finite collection F. The focus is on developing bounds which ensure that the empirical risk is a good indicator of the true risk for every function in F. Concentration inequalities, specifically Markov's inequality and Chernoff's bound, are used to understand how fast empirical means converge to their ensemble counterparts. The notes also cover Hoeffding's inequality, which provides a tighter bound on the probability that the difference between the sample mean and the population mean exceeds a given threshold.
This work is produced by The Connexions Project and licensed under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/). Version 1.3: Feb 28, 2012 9:30 am US/Central.
1.1 Motivation
In the last lecture ("Probably Approximately Correct (PAC) Learning", http://cnx.org/content/m16282/latest/) we considered a learning problem in which the optimal function belonged to a finite class of functions. Specifically, for some collection of functions $F$ with finite cardinality $|F| < \infty$, we have
$$\min_{f \in F} R(f) = 0 \;\Rightarrow\; f^* \in F. \qquad (1)$$
This is almost never the situation in real-world learning problems. Let us suppose we have a finite collection of candidate functions $F$. Furthermore, we do not assume that the optimal function $f^*$, which satisfies
$$R(f^*) = \inf_{f} R(f), \qquad (2)$$
where the infimum is taken over all measurable functions, is a member of $F$. That is, we make few, if any, assumptions about $f^*$. This situation is sometimes termed Agnostic Learning. The root of the word agnostic literally means "not known." The term agnostic learning is used to emphasize the fact that often, perhaps usually, we may have no prior knowledge about $f^*$. The question then arises: how can we reasonably select an $f \in F$ in this setting?
1.2 The Problem
The PAC-style bounds discussed in the previous lecture ("Probably Approximately Correct (PAC) Learning", http://cnx.org/content/m16282/latest/) offer some help. Since we are selecting a function based on the empirical risk, the question is how close $\hat{R}_n(f)$ is to $R(f)$ for every $f \in F$. In other words, we wish the empirical risk to be a good indicator of the true risk for every function in $F$. If this is the case, then the function that minimizes the empirical risk,

$$\hat{f}_n = \arg\min_{f \in F} \hat{R}_n(f), \qquad (3)$$

should also yield a small true risk; that is, $R(\hat{f}_n)$ should be close to $\min_{f \in F} R(f)$. Finally, we can thus state our desired situation as

$$P\left( \max_{f \in F} \left| \hat{R}_n(f) - R(f) \right| > \epsilon \right) < \delta, \qquad (4)$$

for small values of $\epsilon$ and $\delta$. In other words, with probability at least $1 - \delta$, $|\hat{R}_n(f) - R(f)| \le \epsilon$ for all $f \in F$. In this lecture, we will start to develop bounds of this form. First we will focus on bounding $P\left( |\hat{R}_n(f) - R(f)| > \epsilon \right)$ for one fixed $f \in F$.
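To see why a uniform guarantee of the form (4) is enough, here is a short standard argument (added for completeness; it is only implicit in the discussion above). If $|\hat{R}_n(f) - R(f)| \le \epsilon$ holds simultaneously for all $f \in F$, then for the empirical risk minimizer $\hat{f}_n$ in (3),

$$R(\hat{f}_n) \le \hat{R}_n(\hat{f}_n) + \epsilon \le \hat{R}_n(\tilde{f}) + \epsilon \le R(\tilde{f}) + 2\epsilon = \min_{f \in F} R(f) + 2\epsilon,$$

where $\tilde{f} = \arg\min_{f \in F} R(f)$. Thus, with probability at least $1 - \delta$, the true risk of $\hat{f}_n$ is within $2\epsilon$ of the best true risk achievable over $F$.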
To begin, let us recall the definition of empirical risk. Let $\{X_i, Y_i\}_{i=1}^{n}$ be a collection of training data. Then the empirical risk is defined as
$$\hat{R}_n(f) = \frac{1}{n} \sum_{i=1}^{n} \ell\left( f(X_i), Y_i \right). \qquad (5)$$
Note that since the training data $\{X_i, Y_i\}_{i=1}^{n}$ are assumed to be i.i.d. pairs, the terms in the sum are i.i.d. random variables. Let
$$L_i = \ell\left( f(X_i), Y_i \right). \qquad (6)$$

The collection of losses $\{L_i\}_{i=1}^{n}$ is i.i.d. according to some unknown distribution (depending on the unknown joint distribution of $(X, Y)$ and the loss function). The expectation of $L_i$ is $E[\ell(f(X_i), Y_i)] = E[\ell(f(X), Y)] = R(f)$, the true risk of $f$. For now, let us assume that $f$ is fixed. Then
$$E\left[ \hat{R}_n(f) \right] = \frac{1}{n} \sum_{i=1}^{n} E\left[ \ell(f(X_i), Y_i) \right] = \frac{1}{n} \sum_{i=1}^{n} E[L_i] = R(f). \qquad (7)$$
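As a concrete illustration, the following Python sketch computes the empirical risk of one fixed classifier under the 0/1 loss on synthetic data. The data-generating distribution, the classifier f, and all variable names are hypothetical choices made only for this example.

import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # A fixed (and deliberately imperfect) candidate classifier: threshold at 0.
    return (x > 0.0).astype(int)

def zero_one_loss(y_hat, y):
    # 0/1 loss: 1 if the prediction is wrong, 0 otherwise.
    return (y_hat != y).astype(float)

n = 1000
X = rng.normal(size=n)                                   # features X_i
Y = (X + rng.normal(scale=1.0, size=n) > 0).astype(int)  # noisy labels Y_i, so R(f) > 0

losses = zero_one_loss(f(X), Y)           # L_i = loss(f(X_i), Y_i)
print("empirical risk:", losses.mean())   # \hat{R}_n(f) = (1/n) sum_i L_i

Averaging such losses over many independent training sets of size n would approximate $E[\hat{R}_n(f)] = R(f)$, as in (7).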
We know from the strong law of large numbers that the average (or empirical mean) $\hat{R}_n(f)$ converges almost surely to the true mean $R(f)$. That is, $\hat{R}_n(f) \to R(f)$ almost surely as $n \to \infty$. The question is how fast.
Concentration inequalities are upper bounds on how fast empirical means converge to their ensemble counterparts, in probability. The area of the shaded tail regions in Figure 1 is $P\left( |\hat{R}_n(f) - R(f)| > \epsilon \right)$. We are interested in finding out how fast this probability tends to zero as $n \to \infty$.
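The following Python sketch (an illustrative simulation, not part of the original notes) estimates this tail probability by Monte Carlo for a fixed $f$ and increasing $n$; the error rate $p$, the threshold $\epsilon$, and the number of trials are arbitrary choices.

import numpy as np

rng = np.random.default_rng(1)
eps = 0.05          # deviation threshold epsilon
p = 0.3             # true risk R(f): probability that the fixed f errs under 0/1 loss
num_trials = 5000   # Monte Carlo repetitions used to estimate the tail probability

for n in [50, 200, 800, 3200]:
    # Each row holds n i.i.d. 0/1 losses L_i with mean R(f) = p.
    losses = rng.binomial(1, p, size=(num_trials, n))
    deviations = np.abs(losses.mean(axis=1) - p)
    print(f"n = {n:5d}   P(|Rn_hat - R| > eps) ~ {(deviations > eps).mean():.4f}")

The estimated probability shrinks rapidly with $n$, which is exactly the behavior the bounds below quantify.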
Markov's inequality alone, however, yields only a loose bound. According to the Central Limit Theorem,
$$\hat{R}_n(f) = \frac{1}{n} \sum_{i=1}^{n} L_i \to \mathcal{N}\left( R(f), \frac{\sigma_L^2}{n} \right) \quad \text{as } n \to \infty \qquad (11)$$

in distribution, where $\sigma_L^2$ denotes the variance of $L_i$. This suggests that for large values of $n$,

$$P\left( \left| \hat{R}_n(f) - R(f) \right| \ge \epsilon \right) \approx e^{-\frac{n \epsilon^2}{2 \sigma_L^2}}. \qquad (12)$$
That is, the Gaussian tail probability is tending to zero exponentially fast.
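To get a feel for this rate, the short Python snippet below evaluates the quantity $e^{-n\epsilon^2 / (2\sigma_L^2)}$ for a few sample sizes; the values $\epsilon = 0.05$ and $\sigma_L^2 = 0.25$ (the largest possible variance of a loss taking values in $[0, 1]$) are assumptions chosen purely for illustration.

import numpy as np

eps = 0.05
sigma2_L = 0.25   # variance bound for a loss taking values in [0, 1]
for n in [100, 1000, 10000]:
    rate = np.exp(-n * eps**2 / (2 * sigma2_L))
    print(f"n = {n:6d}   exp(-n*eps^2 / (2*sigma_L^2)) = {rate:.3e}")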
Note that for any random variable $Z$ and $t > 0$,

$$P(Z \ge t) = P\left( e^{sZ} \ge e^{st} \right) \le \frac{E\left[ e^{sZ} \right]}{e^{st}}, \quad \forall s > 0, \qquad (13)$$

by Markov's inequality, applied to the nonnegative random variable $e^{sZ}$.
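As a quick numerical sanity check of Markov's inequality itself (not part of the original notes), the snippet below compares $P(Z \ge t)$ with the bound $E[Z]/t$ for an exponentially distributed $Z$; the distribution and thresholds are arbitrary choices.

import numpy as np

rng = np.random.default_rng(2)
Z = rng.exponential(scale=1.0, size=200_000)   # nonnegative Z with E[Z] = 1
for t in [1.0, 2.0, 4.0]:
    exact = (Z >= t).mean()    # Monte Carlo estimate of P(Z >= t)
    markov = Z.mean() / t      # Markov bound E[Z] / t
    print(f"t = {t}:  P(Z >= t) ~ {exact:.4f}   Markov bound = {markov:.4f}")

The bound always holds, but it can be far from tight, which is why the exponential form in (13) and a good choice of s matter.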
Chernoff's bound is based on finding the value of $s$ that minimizes the upper bound in (13) when $Z$ is a sum of independent random variables. For example, if

$$Z = \sum_{i=1}^{n} \left( \ell(f(X_i), Y_i) - R(f) \right) = n \left( \hat{R}_n(f) - R(f) \right), \qquad (14)$$

then the bound becomes

$$P\left( \sum_{i=1}^{n} \left( L_i - E[L_i] \right) \ge t \right) \le e^{-st} \, E\left[ e^{s \sum_{i=1}^{n} (L_i - E[L_i])} \right] = e^{-st} \prod_{i=1}^{n} E\left[ e^{s (L_i - E[L_i])} \right], \qquad (15)$$

where the last equality uses the independence of the $L_i$. Thus, the problem of finding a tight bound boils down to finding a good bound for $E\left[ e^{s (L_i - E[L_i])} \right]$.

Chernoff ('52) first studied this situation for binary random variables. Then, Hoeffding ('63) derived a more general result for arbitrary bounded random variables.
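To illustrate the idea of choosing $s$, the sketch below evaluates the bound $e^{-st} \left( E[e^{s(L_1 - E[L_1])}] \right)^n$ on a grid of $s$ values for centered Bernoulli losses and compares the minimized bound with a Monte Carlo estimate of the true probability. The Bernoulli parameter, threshold, and grid are assumptions made for this example only.

import numpy as np

rng = np.random.default_rng(3)
n, p, t = 100, 0.5, 10.0   # sum of n centered Bernoulli(p) terms, threshold t

# Monte Carlo estimate of P(Z >= t) for Z = sum_i (L_i - E[L_i]).
samples = rng.binomial(1, p, size=(100_000, n)) - p
Z = samples.sum(axis=1)
print(f"P(Z >= t) ~ {(Z >= t).mean():.4f}")

# Chernoff bound e^{-st} * (E[e^{s(L_1 - p)}])^n using the exact Bernoulli moment
# generating function, evaluated on a grid of s; the best s minimizes the bound.
s_grid = np.linspace(0.01, 2.0, 200)
mgf_term = p * np.exp(s_grid * (1 - p)) + (1 - p) * np.exp(-s_grid * p)
bounds = np.exp(-s_grid * t) * mgf_term**n
best = int(np.argmin(bounds))
print(f"best s ~ {s_grid[best]:.3f}, minimized bound ~ {bounds[best]:.4f}")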
Theorem 1: Hoeffding's Inequality
Let $Z_1, Z_2, \ldots, Z_n$ be independent bounded random variables such that $Z_i \in [a_i, b_i]$ with probability one, and let $S_n = \sum_{i=1}^{n} Z_i$. Then for any $t > 0$, we have

$$P\left( \left| S_n - E[S_n] \right| \ge t \right) \le 2 e^{-\frac{2 t^2}{\sum_{i=1}^{n} (b_i - a_i)^2}}. \qquad (16)$$
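The following sketch (added for illustration; the distribution and parameters are my own choices) compares the Hoeffding bound with a Monte Carlo estimate of the left-hand side of (16) for i.i.d. Uniform[0, 1] variables, i.e. $a_i = 0$ and $b_i = 1$.

import numpy as np

rng = np.random.default_rng(4)
n, t = 200, 10.0       # n bounded variables, deviation threshold t
num_trials = 100_000

# Z_i ~ Uniform[0, 1], so a_i = 0, b_i = 1 and E[S_n] = n / 2.
S = rng.uniform(0.0, 1.0, size=(num_trials, n)).sum(axis=1)
empirical = (np.abs(S - n / 2) >= t).mean()

hoeffding = 2 * np.exp(-2 * t**2 / (n * 1.0**2))   # 2 exp(-2 t^2 / sum_i (b_i - a_i)^2)
print(f"empirical P(|S_n - E[S_n]| >= t) ~ {empirical:.5f}")
print(f"Hoeffding bound                 = {hoeffding:.5f}")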
Proof: The key to proving Hoeffding's inequality is the following upper bound: if $Z$ is a random variable with $E[Z] = 0$ and $a \le Z \le b$, then

$$E\left[ e^{sZ} \right] \le e^{\frac{s^2 (b - a)^2}{8}}. \qquad (17)$$
This upper bound is derived as follows. By the convexity of the exponential function,
$$e^{sz} \le \frac{z - a}{b - a} e^{sb} + \frac{b - z}{b - a} e^{sa}, \quad \text{for } a \le z \le b. \qquad (18)$$
Figure 2: Convexity of exponential function.
Thus,

$$E\left[ e^{sZ} \right] \le E\left[ \frac{Z - a}{b - a} \right] e^{sb} + E\left[ \frac{b - Z}{b - a} \right] e^{sa} = \frac{b}{b - a} e^{sa} - \frac{a}{b - a} e^{sb}, \quad \text{since } E[Z] = 0,$$

$$= \left( 1 - \theta + \theta e^{s(b - a)} \right) e^{-\theta s (b - a)}, \quad \text{where } \theta = \frac{-a}{b - a}. \qquad (19)$$
Now let
$u = s(b - a)$ and define

$$\phi(u) \equiv -\theta u + \log\left( 1 - \theta + \theta e^{u} \right). \qquad (20)$$
Then we have
$$E\left[ e^{sZ} \right] \le \left( 1 - \theta + \theta e^{s(b - a)} \right) e^{-\theta s (b - a)} = e^{\phi(u)}. \qquad (21)$$
To minimize the upper bound, let us express $\phi(u)$ as a Taylor series with remainder:

$$\phi(u) = \phi(0) + u \phi'(0) + \frac{u^2}{2} \phi''(v) \quad \text{for some } v \in [0, u]. \qquad (22)$$
Note that $\phi(0) = 0$, and

$$\phi'(u) = -\theta + \frac{\theta e^u}{1 - \theta + \theta e^u} \;\Rightarrow\; \phi'(0) = 0,$$

$$\phi''(u) = \frac{\theta e^u}{1 - \theta + \theta e^u} - \frac{(\theta e^u)^2}{\left( 1 - \theta + \theta e^u \right)^2} = \frac{\theta e^u}{1 - \theta + \theta e^u} \left( 1 - \frac{\theta e^u}{1 - \theta + \theta e^u} \right) = \rho (1 - \rho),$$

where $\rho = \frac{\theta e^u}{1 - \theta + \theta e^u}$. Now, $\rho(1 - \rho)$ is maximized at $\rho = \frac{1}{2}$, so

$$\phi''(u) \le \frac{1}{4}.$$

Combining these facts with (22) gives $\phi(u) \le \frac{u^2}{8} = \frac{s^2 (b - a)^2}{8}$, which establishes (17). Substituting (17) into Chernoff's bound (15) and choosing the value of $s$ that minimizes the result proves the theorem.
Applying Hoeffding's inequality to the losses of each $f \in F$ (with losses bounded between 0 and 1) and using the union bound over the $|F|$ candidate functions, we have shown that with probability at least $1 - 2 |F| e^{-2 n \epsilon^2}$,

$$\left| \hat{R}_n(f) - R(f) \right| < \epsilon, \quad \forall f \in F. \qquad (31)$$

And accordingly, we can be reasonably confident in selecting $f$ from $F$ based on the empirical risk function $\hat{R}_n$.
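As a final illustration (not in the original notes), the snippet below inverts the requirement $2 |F| e^{-2 n \epsilon^2} \le \delta$ to find how many samples suffice for a desired accuracy and confidence; the values of $|F|$, $\epsilon$, and $\delta$ are arbitrary example choices.

import math

card_F = 10_000   # |F|, the number of candidate functions
eps = 0.05        # desired accuracy epsilon
delta = 0.01      # desired confidence parameter delta

# Solving 2*|F|*exp(-2*n*eps^2) <= delta for n gives n >= log(2*|F|/delta) / (2*eps^2).
n = math.ceil(math.log(2 * card_F / delta) / (2 * eps**2))
print(f"n >= {n} samples suffice for |F| = {card_F}, eps = {eps}, delta = {delta}")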