
ISE529 HW6

2024-04-17

1 Guidelines

Submission details: A reminder that you must "knit" your R script (and output) using an R notebook (.Rmd file) to create a single PDF file that you will hand in on Blackboard. Use the "Knit to PDF" dropdown command in RStudio, and make sure to include any plot output you generated as part of your homework. Also hand in the .Rmd file that created your PDF. As always, please include and comment your code to completely explain what you are thinking.

10 points will be awarded if your .Rmd file knits to the same PDF you turned in on Blackboard. Note: the computer clusters in Waite Philips Hall (WPH) and Leavey Library (LVL) are free for students to use; they have RStudio installed and will knit Rmd files to PDF (we just tested this). Make sure you do not hand in a PDF that was converted from a Word document.

There is a total of 60 possible points for this homework.

2 Installation and Environment Setup

Install and set up the required libraries and packages if they are not already available.

# Install and load necessary packages
# install.packages("keras3")
# keras3::install_keras(backend = "tensorflow")
# reticulate::install_python("3.9")
#
# install.packages("tidyverse")
# install.packages('caret')
# install.packages("rsample")

3 Load Libraries

library(keras3)
library(tidyverse)
library(caret)
library(rsample)

4 CIFAR-10 Dataset

CIFAR-10 is a dataset of 60,000 32x32 color images in 10 classes, with 6,000 images per class. A full description of the dataset can be found here: https://www.cs.toronto.edu/~kriz/.

cifar10 <- dataset_cifar10()
x_train <- cifar10$train$x / 255
x_test  <- cifar10$test$x / 255
y_train <- cifar10$train$y
y_test  <- cifar10$test$y
dim(x_train)

4.1 Reshape and One-hot Encoding

Prepare the data for neural network input by converting the integer class labels to one-hot encoded vectors for the 10 classes.

y_train <- to_categorical(y_train, 10)
y_test  <- to_categorical(y_test, 10)

5 Question 1: Basic Neural Network (30 points)

5.1 Defining the Neural Network Architecture

# Set the input shape based on CIFAR-10's 32x32x3 image format
input_shape <- c(32, 32, 3)

# Create the basic neural network model
model_basic <- keras_model_sequential() %>%
  layer_flatten(input_shape = input_shape) %>%
  layer_dense(units = 512, activation = 'relu') %>%
  layer_dense(units = 10, activation = 'softmax')

# Print model summary
summary(model_basic)

5.2 Compiling, Training, and Evaluating the Model (10 points)

# Compile the model
# Train the model
# Evaluate the model on the test set
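A minimal sketch of one way to complete this chunk is shown below. The adam optimizer, categorical cross-entropy loss, 10 epochs, batch size of 64, 20% validation split, and the variable names history_basic and score_basic are illustrative choices made for this example, not values prescribed by the assignment.

# Compile the model (illustrative settings)
model_basic %>% compile(
  optimizer = 'adam',
  loss = 'categorical_crossentropy',
  metrics = c('accuracy')
)

# Train the model, holding out 20% of the training data for validation
history_basic <- model_basic %>% fit(
  x_train, y_train,
  epochs = 10,
  batch_size = 64,
  validation_split = 0.2
)

# Evaluate the model on the test set
score_basic <- model_basic %>% evaluate(x_test, y_test)
score_basic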

5.3 Improving the Model (20 points)

Do you think the model performance can be improved? Explore different techniques such as regularization, dropout, and the number of neurons in each layer to enhance the model's performance. Refer to the notebook from lecture on how to implement these techniques. More information can be found here: https://tensorflow.rstudio.com/tutorials/keras/overfit_and_underfit.

5.3.1 Build a new model with a few (2-3) improvements such as regularization, dropout, etc. (10 points)

Things to try:

• Dropout layer: layer_dropout(rate = 0.4)

• Regularization: kernel_regularizer = regularizer_l2(0.001) as a parameter in the layer_dense function

• Adjusting the number of neurons in each layer

• Adding more layers

• Experiment with different activation functions

# Define the improved model architecture
# Print model summary
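As one illustration (not the required answer), the sketch below combines L2 regularization and dropout with a second hidden layer. The penalty of 0.001, dropout rate of 0.4, layer sizes, and the name model_improved are assumptions made for this example; you should experiment with your own choices.

# Define the improved model architecture (illustrative sketch)
model_improved <- keras_model_sequential() %>%
  layer_flatten(input_shape = c(32, 32, 3)) %>%
  layer_dense(units = 512, activation = 'relu',
              kernel_regularizer = regularizer_l2(0.001)) %>%   # L2 weight penalty
  layer_dropout(rate = 0.4) %>%                                  # dropout after first hidden layer
  layer_dense(units = 256, activation = 'relu',
              kernel_regularizer = regularizer_l2(0.001)) %>%
  layer_dropout(rate = 0.4) %>%
  layer_dense(units = 10, activation = 'softmax')

# Print model summary
summary(model_improved)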

5.3.2 Check the performance of the improved model (10 points)

# Compile and train the improved model
# Evaluate the improved model on the test set
# Compare the performance of the basic and improved models
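A minimal sketch, reusing the same illustrative training settings assumed above for the basic model so the comparison is like-for-like:

# Compile and train the improved model
model_improved %>% compile(
  optimizer = 'adam',
  loss = 'categorical_crossentropy',
  metrics = c('accuracy')
)

history_improved <- model_improved %>% fit(
  x_train, y_train,
  epochs = 10,
  batch_size = 64,
  validation_split = 0.2
)

# Evaluate the improved model on the test set
score_improved <- model_improved %>% evaluate(x_test, y_test)

# Compare the performance of the basic and improved models
# (score_basic was computed in the earlier sketch)
score_basic
score_improved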

How does the improved model perform compared to the basic model? Discuss the results and any insights related to the techniques you used to improve the model.

6 Question 2: Convolutional Neural Network (CNN) (20 points)

In this part, build a CNN to work with the CIFAR-10 dataset and compare its effectiveness against the basic neural network. Refer to the “Revisiting the MNIST Dataset with a Simple Convolutional Neural Network” section from the lecture notebook (April 18) for guidance.

6.1 Building the CNN (10 points)

6.1.1 Fill in the parameters marked "FILL_THIS_IN" to complete the CNN architecture (5 points)

# Define the CNN architecture
model_cnn <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = 'relu',
                input_shape = "FILL_THIS_IN") %>%
  layer_max_pooling_2d(pool_size = "FILL_THIS_IN") %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = 'relu') %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = 'relu') %>%
  layer_flatten() %>%
  layer_dense(units = 64, activation = 'relu') %>%
  layer_dense("FILL_THIS_IN", activation = 'softmax')

# Print the CNN model summary
summary(model_cnn)
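The blanks are for you to fill in. Purely as a hint, one plausible completion follows directly from the dataset described in Section 4 (32x32 images with 3 colour channels, 10 classes) and from the 2x2 window used by the second pooling layer; the values below are inferred from those facts, not given in the assignment:

# Illustrative completion of the CNN architecture
model_cnn <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = 'relu',
                input_shape = c(32, 32, 3)) %>%   # 32x32 pixels, 3 colour channels
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%   # 2x2 window, matching the later pooling layer
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = 'relu') %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = 'relu') %>%
  layer_flatten() %>%
  layer_dense(units = 64, activation = 'relu') %>%
  layer_dense(units = 10, activation = 'softmax') # 10 output units, one per CIFAR-10 class

summary(model_cnn)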

6.1.2 Explain the CNN Architecture (5 points)

Explain the architecture of the CNN model above. What are the layers used, and what do they do? Explain your reasoning behind the parameters you filled in ("FILL_THIS_IN") in the previous question.

6.2 Training and Evaluating the CNN (10 points)

# Compile the CNN model
# Train the CNN model
# Plot the training history
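A minimal sketch of one way to complete this chunk, again using the illustrative training settings assumed earlier (adam optimizer, 10 epochs, batch size 64, 20% validation split); the name history_cnn is chosen here for the example:

# Compile the CNN model
model_cnn %>% compile(
  optimizer = 'adam',
  loss = 'categorical_crossentropy',
  metrics = c('accuracy')
)

# Train the CNN model
history_cnn <- model_cnn %>% fit(
  x_train, y_train,
  epochs = 10,
  batch_size = 64,
  validation_split = 0.2
)

# Plot the training history (loss and accuracy per epoch)
plot(history_cnn)

# Evaluate on the test set for comparison with the earlier models
model_cnn %>% evaluate(x_test, y_test)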

7 Conclusion (10 points)

Did the CNN model perform significantly better? Discuss the results and explain how the convolutional neural network leverages the spatial information in the images to improve performance. Also, compare the performance of the CNN with the improved basic neural network model.




