CS547, Neural Networks: Homework 3

Christopher E. Davis - chris2d@cs.unm.edu

University of New Mexico

1 Theory Problem - Perceptron

a Give a proof of the Perceptron Convergence Theorem any way you can.

Let z(n) be the input x(n) transformed to include the bias. Recall from the text and lecture that the perceptron learning rule is given by:

\hat{w}(n+1) = \hat{w}(n) + \eta\,\delta(n)\,z(n) \qquad (1)

where δ(n) is 1 if the nth example is misclassified by the current weight vector and 0 otherwise. Now let ẑ(n) be z(n) normalized, namely:

\hat{z}(n) = \frac{z(n)}{\|z(n)\|} \qquad (2)

Let w₊ be a solution vector such that w₊ᵀ ẑ(n) ≥ 1 for all n. Now let us suppose that ẑ(n) is misclassified. Consider the distance between the new weight vector and the solution vector. Set η = 1.

\begin{aligned}
\|\hat{w}(n+1) - w_+\|^2 &= \|(\hat{w}(n) + \hat{z}(n)) - w_+\|^2 \\
 &= \|(\hat{w}(n) - w_+) + \hat{z}(n)\|^2 \\
 &= \|\hat{w}(n) - w_+\|^2 + 2\,\hat{z}(n)^T(\hat{w}(n) - w_+) + 1 \\
 &= \|\hat{w}(n) - w_+\|^2 + 2\,\hat{z}(n)^T\hat{w}(n) - 2\,\hat{z}(n)^T w_+ + 1 \qquad (3)
\end{aligned}

where the third line expands the square and uses ‖ẑ(n)‖² = 1, since ẑ(n) is normalized.

Because ẑ(n) is misclassified we know that ẑ(n)ᵀ ŵ(n) ≤ 0 and that ẑ(n)ᵀ w₊ ≥ 1. This means that the two middle terms in Equation (3), when combined, are ≤ −2.

\begin{aligned}
\|\hat{w}(n+1) - w_+\|^2 &\le \|\hat{w}(n) - w_+\|^2 - 2 + 1 \\
\|\hat{w}(n+1) - w_+\|^2 &\le \|\hat{w}(n) - w_+\|^2 - 1 \qquad (4)
\end{aligned}

Equation (4) shows that the squared distance between the new weight vector and the solution vector decreases by at least one every time a misclassified example triggers an update. This means that the algorithm will converge after at most ‖ŵ(0) − w₊‖² updates. Setting ŵ(0) = 0, the algorithm will converge in at most ‖w₊‖² steps. Thus I have shown through algebraic manipulation that the perceptron will always converge in finite time if a solution exists, for η = 1.
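Spelling out the counting step implicit here: the squared distance can never be negative, so after k misclassification updates starting from ŵ(0),

0 \le \|\hat{w}(k) - w_+\|^2 \le \|\hat{w}(0) - w_+\|^2 - k,

which forces k ≤ ‖ŵ(0) − w₊‖² and bounds the total number of updates.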

b What effect will varying the learning parameter have on convergence? Why?

If I had originally defined w₊ such that its inner product with every example is ≥ b, and had carried η through the manipulations, you would see that for any η < 2b the algorithm converges to a solution in at most ‖w₊‖² / (η(2b − η)) steps, which is minimized for η = b. By observation, varying η changes the speed of convergence. Larger η seemed to result in faster convergence, but not all the time.
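A sketch of where that bound comes from, keeping η explicit and assuming w₊ᵀ ẑ(n) ≥ b for every example: a misclassification update gives

\begin{aligned}
\|\hat{w}(n+1) - w_+\|^2 &= \|\hat{w}(n) - w_+\|^2 + 2\eta\,\hat{z}(n)^T(\hat{w}(n) - w_+) + \eta^2 \\
 &\le \|\hat{w}(n) - w_+\|^2 - 2\eta b + \eta^2 \\
 &= \|\hat{w}(n) - w_+\|^2 - \eta(2b - \eta).
\end{aligned}

Each update therefore shrinks the squared distance by at least η(2b − η), which is positive only for 0 < η < 2b. Starting from ŵ(0) = 0 this allows at most ‖w₊‖² / (η(2b − η)) updates, and the bound is smallest where η(2b − η) is largest, i.e. at η = b.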

2 Computer Problem - Perceptron Experimentation

a Using your own code and data structures, implement a simulation of the Perceptron Learning Rule on a single neuron with two data inputs and one bias input. See the attached code listing “main.c”.
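The listing “main.c” is not reproduced in this preview; the following is a minimal sketch of such a trainer, assuming ±1 class labels and a hard-limit output. The toy data points, learning rate, and zero initial weights are illustrative placeholders, not values from the assignment's training file.

/*
 * Minimal sketch of the perceptron trainer: one neuron, two data inputs
 * plus a bias input, hard-limit output in {-1, +1}.  Toy data, learning
 * rate, and initial weights below are illustrative assumptions only.
 */
#include <stdio.h>

#define N_SAMPLES  4
#define MAX_EPOCHS 100

int main(void)
{
    /* Each row is {bias = 1, x1, x2}; d[n] is the desired class (+1/-1). */
    double x[N_SAMPLES][3] = {
        {1.0, 0.5, 2.0}, {1.0, 1.0, 3.0},   /* class +1 */
        {1.0, 2.0, 0.5}, {1.0, 3.0, 1.0}    /* class -1 */
    };
    int d[N_SAMPLES] = { 1, 1, -1, -1 };

    double w[3] = { 0.0, 0.0, 0.0 };        /* bias weight, w1, w2 */
    double eta  = 1.0;

    for (int epoch = 0; epoch < MAX_EPOCHS; epoch++) {
        int errors = 0;
        for (int n = 0; n < N_SAMPLES; n++) {
            double v = w[0]*x[n][0] + w[1]*x[n][1] + w[2]*x[n][2];
            int y = (v >= 0.0) ? 1 : -1;
            if (y != d[n]) {                /* misclassified: delta(n) = 1 */
                for (int i = 0; i < 3; i++)
                    w[i] += eta * d[n] * x[n][i];  /* Eq. (1) with z(n) = d(n) x(n) */
                errors++;
            }
        }
        printf("epoch %d: errors=%d  w=(%g, %g, %g)\n",
               epoch, errors, w[0], w[1], w[2]);
        if (errors == 0)
            break;                          /* separable data: converged */
    }
    return 0;
}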

b Plot the included training data as an x-y scatter plot and determine by visual inspection if they are linearly separable.

By examining Figure 1, the data are obviously linearly separable.

c Train the Perceptron with the included training data. Produce plots of the three weights and the number of errors as a function of simulator epoch. What effect does randomizing the order of the training data have on the weight convergence?

By examining Figures 2 - 5 we see that randomization can slightly affect the time to convergence. There does not appear to be a clear trend towards better performance as far as I can see. As with most randomized approaches, we would expect the average behavior to be fairly well behaved if we present randomized orderings of the data. Randomization can often help avoid a malicious ordering of data points.
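One simple way to randomize the presentation order each epoch, sketched here in C (the actual approach in “main.c” may differ), is to shuffle an array of sample indices with a Fisher-Yates pass:

#include <stdlib.h>

/*
 * Shuffle an array of sample indices in place (Fisher-Yates), so that each
 * epoch can present the training examples in a different random order.
 * Seed the generator once with srand() at program start; rand() % (i + 1)
 * is slightly biased but adequate for this experiment.
 */
static void shuffle_indices(int *idx, int n)
{
    for (int i = n - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        int tmp = idx[i];
        idx[i] = idx[j];
        idx[j] = tmp;
    }
}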

d Run this multiple times with different presentation orders and initial weights, plotting the range of variation of error count as a function of epochs.

3 Computer Problem - Delta-Rule Experimentation

a Similar to the above, using your own code and data structures again, implement a simulation of the Delta Rule on a single neuron with two data inputs and one bias input. See the attached code listing “ass3.m”.
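The attached “ass3.m” is a MATLAB listing and is not reproduced here; for illustration, a minimal C sketch of one training epoch under the Delta Rule (LMS) for this single linear neuron might look as follows. The function name, fixed three-weight layout, and returned RMS error are assumptions, not the author's code.

#include <math.h>

/*
 * One epoch of Delta-Rule (LMS) training for a single linear neuron with
 * two data inputs plus a bias input, returning the RMS error over the
 * epoch.  x[n][0] is the bias input (1.0), x[n][1..2] are the data
 * inputs, d[n] is the target output.
 */
static double delta_rule_epoch(double w[3], const double x[][3],
                               const double d[], int n_samples, double eta)
{
    double sse = 0.0;                        /* sum of squared errors */
    for (int n = 0; n < n_samples; n++) {
        double y = w[0]*x[n][0] + w[1]*x[n][1] + w[2]*x[n][2];  /* linear output */
        double e = d[n] - y;                 /* error for this example */
        for (int i = 0; i < 3; i++)
            w[i] += eta * e * x[n][i];       /* w <- w + eta * (d - y) * x */
        sse += e * e;
    }
    return sqrt(sse / n_samples);            /* RMS error for this epoch */
}

Calling this once per epoch with a small η and recording the returned value produces the kind of RMS-error-versus-epoch curves plotted in Figures 8 - 10.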

[Figure 2: Plot of weights and errors as a function of simulator epoch for a unique presentation of data]

[Figure 3: Plot of weights and errors as a function of simulator epoch for a unique presentation of data]

[Figure 4: Plot of weights and errors as a function of simulator epoch for a unique presentation of data]

[Figure 5: Plot of weights and errors as a function of simulator epoch for a unique presentation of data]

[Figure 8: Plot of weights and RMS error as a function of simulator epoch for a unique presentation of data]

[Figure 9: Plot of weights and RMS error as a function of simulator epoch for a presentation of data including a misclassified element]

[Figure 10: Plot of RMS error as a function of simulator epoch for a unique presentation of data]