
Biological Neural Computation

Homework problem set 2

Spring 2024

Date Assigned: 2/19/2024

Date Due: 3/08/2024

General Guidelines: The homework solutions should include figures that clearly capture the results. The figures must be labeled and well explained, and the results must be clearly discussed. When appropriate, it is recommended that you use the Hypothesis – Rationale – Experiments/Data – Analysis – Results – Discussion/Conclusions – Limitation(s) framework to discuss your work.

The first sheet of the homework must certify that this is completely your own work and list the students/people you have consulted or received help from (with your signature and date of submission). All online references used must be listed in the reference section at the end of the homework.

Good luck,

Barani Raman


Points for BME 572 / BME 472 students | Points for L41 5657 students

(For each problem below, the first bracketed point value applies to BME 572/BME 472 students and the second to L41 5657 students.)

Problem 1. Implement the batch perceptron algorithm to obtain a linear discriminant function, as described in Chapter 5 of the Duda et al. Pattern Classification book. Create linearly separable and non-linearly separable datasets with samples belonging to two classes, and apply your perceptron algorithm to discriminate between them. Report your observations and analysis. Plot the classification error vs. the number of iterations, the classification results, and the obtained decision boundary.

[30 pts]

[50 pts]
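As a reference for the mechanics (not a required implementation), here is a minimal MATLAB sketch of the batch perceptron update on a synthetic linearly separable dataset; the Gaussian data generation, learning rate eta, and iteration cap are illustrative choices:

    % Batch perceptron sketch on synthetic 2-D data (illustrative names).
    % Samples are augmented with a bias term and class-2 samples are
    % sign-normalized, so a sample is misclassified whenever Y*a <= 0
    % (Duda et al., Ch. 5).
    n  = 100;
    X1 = randn(n,2) + 2;                      % class 1 cloud
    X2 = randn(n,2) - 2;                      % class 2 cloud
    Y  = [ones(n,1)  X1; -ones(n,1) -X2];     % augmented, sign-normalized samples
    a  = zeros(3,1);                          % weights: [bias; w1; w2]
    eta = 0.1;  maxIter = 100;
    errs = zeros(1, maxIter);
    for it = 1:maxIter
        mis = (Y*a <= 0);                     % currently misclassified samples
        errs(it) = sum(mis);
        if ~any(mis), break; end              % converged: all samples correct
        a = a + eta * sum(Y(mis,:), 1)';      % batch update over all errors
    end
    plot(errs(1:it)); xlabel('iteration'); ylabel('# misclassified');

On the non-separable dataset the misclassification count never reaches zero, so capping the iterations (as above) or decaying eta is needed for the loop to terminate.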

Problem 2. Using the same datasets as in Problem 1, now create a linear classifier using the Least Mean Squares (LMS) rule. Compare these results with the perceptron algorithm results.

[20 pts]

[50 pts]
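For comparison, a minimal MATLAB sketch of the sample-by-sample LMS (Widrow-Hoff) update with +/-1 targets; again, the data, eta, and pass count are illustrative:

    % LMS (Widrow-Hoff) sketch on synthetic 2-D data (illustrative names).
    % Targets are +/-1 and samples are augmented with a bias term.
    n  = 100;
    X  = [randn(n,2) + 2; randn(n,2) - 2];    % two Gaussian classes
    d  = [ones(n,1); -ones(n,1)];             % desired outputs
    Y  = [ones(2*n,1) X];                     % augmented samples
    a  = zeros(3,1);  eta = 0.01;
    for pass = 1:50
        for k = 1:2*n
            e = d(k) - Y(k,:)*a;              % instantaneous error
            a = a + eta * e * Y(k,:)';        % Widrow-Hoff update
        end
    end
    fprintf('LMS training accuracy: %.1f%%\n', 100*mean(sign(Y*a) == d));

Unlike the perceptron, LMS minimizes the squared error to the +/-1 targets rather than the misclassification count, so it also converges on non-separable data, but its decision boundary can differ from the perceptron's even when the data are separable.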


For BME 572 students only

[50 pts]

Problem 3. Using the back-propagation algorithm, train a multilayer perceptron for the problem of recognizing handwritten digits. A popular dataset ('mnist_all.dat'), comprising training and testing samples of the different digits, is provided in the homework folder. Each sample is a 28x28 grayscale, 8-bit image.

Figure 1: Sample of the nine handwritten digits in the MNIST dataset.

Training:

The matrix train0 contains the training samples for digit '0'. Each row has 784 columns corresponding to the 28x28 pixels (you can use the reshape command to plot the digits; e.g., imagesc(reshape(train0(1,:),28,28)') plots the first training sample for digit '0'). Similarly, there is one dataset corresponding to each digit. You will train your network using the training samples only. You are free to choose a network of any size and any non-linear activation function. You are also free to use any preprocessing or dimensionality-reduction technique, or to use only a subset of the training samples, if you would like to reduce the complexity of the neural network or of the training process.
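As one possible starting point, a loading/preprocessing sketch in MATLAB; this assumes the provided 'mnist_all.dat' has MAT-format contents (hence the '-mat' flag), which you should verify against the actual homework file:

    % Load the per-digit matrices and stack them into one training set.
    % Assumes the .dat file has MAT-format contents (hence '-mat').
    S = load('mnist_all.dat', '-mat');        % fields train0..train9, test0..test9
    Xtr = [];  ttr = [];
    for digit = 0:9
        D   = double(S.(sprintf('train%d', digit))) / 255;   % scale pixels to [0,1]
        Xtr = [Xtr; D];                                      % 784-D row vectors
        ttr = [ttr; repmat(digit, size(D,1), 1)];            % digit labels
    end
    imagesc(reshape(Xtr(1,:), 28, 28)');  colormap gray;     % sanity-check first sample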

Initialize the weight vectors to very small random numbers between 0 and 0.1. This will help the network converge better than it would with equal or zero initial weights.
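For example (the 784-50-10 layer sizes below are an illustrative choice, not a requirement):

    nIn = 784; nHid = 50; nOut = 10;    % layer sizes (illustrative)
    W1 = 0.1 * rand(nHid, nIn + 1);     % hidden weights in (0, 0.1), +1 bias column
    W2 = 0.1 * rand(nOut, nHid + 1);    % output weights in (0, 0.1), +1 bias column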


For the non-linear activation, two popular choices are the following (the corresponding derivatives and local-gradient equations are reproduced after the Testing section below):

Choice 1: Logistic function

Choice 2: Hyperbolic tangent function

[Note: a, b are constants]

Testing:

The matrix test0 contains the test samples for digit '0'. Similarly, there is one such matrix corresponding to each digit. You will evaluate the performance of your network using the test samples only.

Show the evolution of the prediction error as a function of the training iteration, the final classification percentages for each digit, and the overall classification performance. Discuss your findings.
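One way to tabulate these numbers in MATLAB; classify_digit is a hypothetical stand-in for your trained network's forward pass plus arg-max:

    % Per-digit and overall test accuracy; classify_digit() is a
    % hypothetical stand-in for your trained network's prediction.
    S = load('mnist_all.dat', '-mat');
    correct = zeros(1,10);  total = zeros(1,10);
    for digit = 0:9
        T = double(S.(sprintf('test%d', digit))) / 255;
        total(digit+1) = size(T,1);
        for k = 1:size(T,1)
            correct(digit+1) = correct(digit+1) + (classify_digit(T(k,:)') == digit);
        end
    end
    fprintf('digit %d: %.1f%%\n', [0:9; 100*correct./total]);   % per-digit accuracy
    fprintf('overall: %.1f%%\n', 100*sum(correct)/sum(total));  % overall accuracy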

For reference about this popular dataset, take a look at: http://yann.lecun.com/exdb/mnist/

Choice 1 (logistic function):

$$ y_j(n) = \varphi_j(v_j(n)) = \frac{1}{1 + \exp(-a\,v_j(n))}, \qquad a > 0, \quad -\infty < v_j(n) < \infty $$

$$ \varphi_j'(v_j(n)) = a\,y_j(n)\,[1 - y_j(n)] $$

$$ \delta_j(n) = e_j(n)\,\varphi_j'(v_j(n)) = a\,[d_j(n) - o_j(n)]\,o_j(n)\,[1 - o_j(n)], \qquad j \to \text{output node} $$

$$ \delta_h(n) = \varphi_h'(v_h(n)) \sum_j \delta_j(n)\,w_{jh}(n) = a\,y_h(n)\,[1 - y_h(n)] \sum_j \delta_j(n)\,w_{jh}(n), \qquad h \to \text{hidden node} $$

Choice 2 (hyperbolic tangent function):

$$ y_j(n) = \varphi_j(v_j(n)) = a \tanh(b\,v_j(n)), \qquad a, b > 0 $$

$$ \varphi_j'(v_j(n)) = \frac{b}{a}\,[a - y_j(n)]\,[a + y_j(n)] $$

$$ \delta_j(n) = e_j(n)\,\varphi_j'(v_j(n)) = \frac{b}{a}\,[d_j(n) - o_j(n)]\,[a - o_j(n)]\,[a + o_j(n)], \qquad j \to \text{output node} $$

$$ \delta_h(n) = \varphi_h'(v_h(n)) \sum_j \delta_j(n)\,w_{jh}(n) = \frac{b}{a}\,[a - y_h(n)]\,[a + y_h(n)] \sum_j \delta_j(n)\,w_{jh}(n), \qquad h \to \text{hidden node} $$
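Putting the logistic-choice equations to work, here is a minimal MATLAB sketch of one stochastic training step (a = 1; the layer sizes, eta, and the dummy sample/target are illustrative, not prescribed):

    % One stochastic back-prop step with the logistic activation (a = 1).
    % Layer sizes, eta, and the dummy sample/target are illustrative.
    W1  = 0.1 * rand(50, 785);  W2 = 0.1 * rand(10, 51);   % small random init
    phi = @(v) 1 ./ (1 + exp(-v));                         % logistic function
    eta = 0.1;
    x = rand(784,1);  t = zeros(10,1);  t(1) = 1;          % dummy sample, one-hot target
    yh = phi(W1 * [1; x]);                                 % hidden activations (bias prepended)
    o  = phi(W2 * [1; yh]);                                % network outputs
    dOut = (t - o) .* o .* (1 - o);                        % delta_j = [d_j - o_j] o_j [1 - o_j]
    dHid = yh .* (1 - yh) .* (W2(:,2:end)' * dOut);        % delta_h = y_h [1 - y_h] sum_j delta_j w_jh
    W2 = W2 + eta * dOut * [1; yh]';                       % output-layer weight update
    W1 = W1 + eta * dHid * [1; x]';                        % hidden-layer weight update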


°æȨËùÓУº±à³Ì¸¨µ¼Íø 2021 All Rights Reserved ÁªÏµ·½Ê½£ºQQ:99515681 ΢ÐÅ£ºcodinghelp µç×ÓÐÅÏ䣺99515681@qq.com
ÃâÔðÉùÃ÷£º±¾Õ¾²¿·ÖÄÚÈÝ´ÓÍøÂçÕûÀí¶øÀ´£¬Ö»¹©²Î¿¼£¡ÈçÓаæȨÎÊÌâ¿ÉÁªÏµ±¾Õ¾É¾³ý¡£ Õ¾³¤µØͼ

python´úд
΢ÐÅ¿Í·þ£ºcodinghelp