
Computer Vision Lecture 4: Filtering and Noise Reduction

Part of the 'Intro to Computer Vision' lecture series by Greg Shakhnarovich. This lecture focuses on filters, specifically 2D cross-correlation filtering and convolution. It covers mean and median filters, denoising with mean and median filters, and the Gaussian noise model, and discusses the relationship between filter size (k) and standard deviation (σ) in Gaussian filters.

Intro to Computer Vision
Lecture 4
Greg Shakhnarovich
April 13, 2010


Review: filters

2D cross-correlation filtering of image I(x, y) with a (2k + 1) × (2k + 1) filter H:

(I ⊗ H)(i, j) = Σ_{q=−k}^{k} Σ_{r=−k}^{k} I(i + q, j + r) · H(q, r)

[Figure: the filter H drawn as a grid, with coordinates q, r ∈ {−2, …, 2}]


Review: filters

2D convolution of image I(x, y) with the same (2k + 1) × (2k + 1) filter H:

(I ∗ H)(i, j) = Σ_{q=−k}^{k} Σ_{r=−k}^{k} I(i − q, j − r) · H(q, r)

Convolution is cross-correlation with the filter flipped in both dimensions: the offsets enter as i − q, j − r rather than i + q, j + r.
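The two definitions above can be sketched directly in NumPy. This is a minimal illustration, not the lecture's own code; the function names `cross_correlate` and `convolve` and the zero-padding of the image border are my assumptions.

```python
import numpy as np

def cross_correlate(I, H):
    """2D cross-correlation: (I ⊗ H)(i, j) = Σ_q Σ_r I(i+q, j+r) · H(q, r),
    for q, r = -k..k. The image is zero-padded so the output matches I's size
    (a border-handling choice, not specified in the slides)."""
    k = H.shape[0] // 2
    Ipad = np.pad(I, k)                 # zero-pad k pixels on every side
    out = np.zeros_like(I, dtype=float)
    for i in range(I.shape[0]):
        for j in range(I.shape[1]):
            # Ipad[i:i+2k+1, j:j+2k+1] is the neighborhood of pixel (i, j)
            out[i, j] = np.sum(Ipad[i:i + 2*k + 1, j:j + 2*k + 1] * H)
    return out

def convolve(I, H):
    """2D convolution: cross-correlation with H flipped in both dimensions,
    which turns the I(i+q, j+r) offsets into I(i-q, j-r)."""
    return cross_correlate(I, H[::-1, ::-1])
```

For a symmetric filter H (such as a mean or Gaussian filter), flipping changes nothing, so convolution and cross-correlation coincide.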

Another noise model

Gaussian noise model

Additive noise model:

Ĩ(x, y) = I(x, y) + ν(x, y)

Assumption I: ν(x, y) is a random variable, independent of (x, y)

  • When could this assumption be violated?

Assumption II: white noise, i.e., zero mean: E[ν(x, y)] = 0

Gaussian noise:

p(ν; σ) = 1/√(2πσ²) · exp(−ν²/(2σ²))
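A quick sketch of this additive model, under the stated assumptions (zero-mean Gaussian ν, independent of position); the function name and the fixed seed are mine, for reproducibility:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, so runs are reproducible

def add_gaussian_noise(I, sigma):
    """Additive Gaussian noise: Ĩ(x, y) = I(x, y) + ν(x, y), where
    ν ~ N(0, sigma²) is drawn independently at every pixel (x, y)."""
    return I + rng.normal(0.0, sigma, size=I.shape)
```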

Refresher: the Gaussian distribution

(1D) Gaussian with mean μ and variance σ²:

p(x; μ, σ) = 1/√(2πσ²) · exp(−(x − μ)²/(2σ²))

The mean determines location; the variance determines shape.

[Figure: 1D Gaussian densities p(x) for μ = 0, σ = 1; μ = −3; μ = 2, σ = 0.25; and σ = 2]

Extremely widely used.
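The density above is one line of code. A minimal sketch (the function name `gaussian_pdf` is mine); the assertion-style check that the density sums to ≈ 1 on a fine grid is just a sanity test of the formula:

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """1D Gaussian density: p(x; mu, sigma) = 1/sqrt(2·pi·sigma²) ·
    exp(-(x - mu)² / (2·sigma²))."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) \
        / math.sqrt(2 * math.pi * sigma ** 2)
```

At x = μ the density peaks at 1/√(2πσ²), so a smaller σ gives a taller, narrower curve, matching the figure.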

Removing Gaussian noise

[Figure: noisy image; mean filter, k = 3; median filter, k = 3; Gaussian filter, k = 3, σ = 1]
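The mean and median filters compared in the figure can be sketched as follows. This is an illustrative NumPy version (function names and the edge-replication padding are my choices, not from the slides):

```python
import numpy as np

def mean_filter(I, k):
    """Replace each pixel with the mean of its (2k+1) × (2k+1) neighborhood.
    Edge-replication padding keeps the output the same size as the input."""
    Ipad = np.pad(I, k, mode='edge')
    out = np.empty_like(I, dtype=float)
    for i in range(I.shape[0]):
        for j in range(I.shape[1]):
            out[i, j] = Ipad[i:i + 2*k + 1, j:j + 2*k + 1].mean()
    return out

def median_filter(I, k):
    """Replace each pixel with the median of its neighborhood. Unlike the
    mean, the median ignores a small number of extreme outlier values."""
    Ipad = np.pad(I, k, mode='edge')
    out = np.empty_like(I, dtype=float)
    for i in range(I.shape[0]):
        for j in range(I.shape[1]):
            out[i, j] = np.median(Ipad[i:i + 2*k + 1, j:j + 2*k + 1])
    return out
```

On an image with a single bright outlier pixel, the median filter removes the spike entirely, while the mean filter smears it across the neighborhood.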

Gaussian filter - 1D

Mean filtering: convolve the signal with a box filter that takes the constant value 1/(2k + 1) at each position −k, …, 0, …, k.

Idea: instead, convolve with a filter shaped like a Gaussian.

The actual digital filter Gσ is an approximation of the Gaussian function p(x; 0, σ), sampled at integer coordinates x = −k, …, k.
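Sampling the Gaussian at integers, as the text describes, can be sketched as below. One detail is my assumption: the taps are renormalized to sum to 1, since the finite integer samples only approximate the unit integral of the continuous density.

```python
import numpy as np

def gaussian_filter_1d(k, sigma):
    """Digital filter G_sigma: sample the Gaussian p(x; 0, sigma) at integer
    coordinates x = -k, ..., k, then normalize so the taps sum to 1
    (otherwise filtering would slightly change the signal's overall level)."""
    x = np.arange(-k, k + 1)
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return g / g.sum()
```

The result is symmetric and peaked at x = 0; convolving with it is a weighted mean that favors nearby samples, in contrast to the box filter's uniform weights 1/(2k + 1).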