
Homework 5, INF 552

1. Multi-class and Multi-Label Classification Using Support Vector Machines

(a) Download the Anuran Calls (MFCCs) Data Set from: https://archive.ics.uci.edu/ml/datasets/Anuran+Calls+%28MFCCs%29. Choose 70% of the data randomly as the training set.

(b) Each instance has three labels: Families, Genus, and Species. Each of the labels has multiple classes. We wish to solve a multi-class and multi-label problem. One of the most important approaches to multi-label classification is to train a classifier for each label. We first try this approach:

i. Research exact match and Hamming score/loss methods for evaluating multi-label classification and use them in evaluating the classifiers in this problem (a code sketch of these metrics follows this list).

ii. Train an SVM for each of the labels, using Gaussian kernels and one-versus-all classifiers. Determine the weight of the SVM penalty and the width of the Gaussian kernel using 10-fold cross-validation [1]. You are welcome to try to solve the problem with both standardized [2] and raw attributes and report the results (see the sketch after this list).

iii. Repeat 1(b)ii with L1-penalized SVMs [3]. Remember to standardize [4] the attributes. Determine the weight of the SVM penalty using 10-fold cross-validation (a sketch appears after footnote [4] below).

iv. Repeat 1(b)iii by using SMOTE or any other method you know to remedy class imbalance. Report your conclusions about the classifiers you trained.

v. Extra Practice: Study the Classifier Chain method and apply it to the above problem.

vi. Extra Practice: Research how confusion matrices, precision, recall, ROC, and AUC are defined for multi-label classification and compute them for the classifiers you trained above.
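The following is a minimal sketch of items i and ii with scikit-learn, not the required solution: the file name Frogs_MFCCs.csv, the column layout, the parameter grid, and the random seed are all assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Hamming loss: fraction of (instance, label) pairs that are wrong.
# Hamming score is its complement here, since every instance carries
# exactly three labels. Exact match requires all three to be correct.
def hamming_loss(Y_true, Y_pred):
    return np.mean(Y_true != Y_pred)

def exact_match(Y_true, Y_pred):
    return np.mean((Y_true == Y_pred).all(axis=1))

df = pd.read_csv("Frogs_MFCCs.csv")              # assumed UCI file name
X = df.iloc[:, :22].values                       # 22 MFCC attributes
Y = df[["Family", "Genus", "Species"]].values    # the three label columns

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, train_size=0.7,
                                          random_state=0)

# One Gaussian-kernel SVM per label, wrapped as one-vs-rest; the grid is
# an assumed placeholder (see footnote [1] on choosing ranges).
grid = {"estimator__C": np.logspace(-2, 4, 7),
        "estimator__gamma": np.linspace(0.1, 2.0, 5)}
Y_pred = np.empty_like(Y_te)
for j in range(Y_tr.shape[1]):
    search = GridSearchCV(OneVsRestClassifier(SVC(kernel="rbf")),
                          grid, cv=10, n_jobs=-1)
    search.fit(X_tr, Y_tr[:, j])
    Y_pred[:, j] = search.predict(X_te)

print("exact match :", exact_match(Y_te, Y_pred))
print("hamming loss:", hamming_loss(Y_te, Y_pred))
```

The explicit OneVsRestClassifier wrapper is used because a bare SVC decomposes multi-class problems one-versus-one rather than one-versus-all.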


[1] How to choose parameter ranges for SVMs? One can use wide ranges for the parameters and a fine grid (e.g., 1000 points) for cross-validation; however, this method may be computationally expensive. An alternative way is to train the SVM with very large and very small parameters on the whole training data and find very large and very small parameters for which the training accuracy is not below a threshold (e.g., 70%). Then one can select a fixed number of parameters (e.g., 20) between those points for cross-validation. For the penalty parameter, usually one has to consider increments in log(λ). For example, if one found that the accuracy of a support vector machine will not be below 70% for λ = 10⁻³ and λ = 10⁶, one has to choose log(λ) ∈ {−3, −2, . . . , 4, 5, 6}. For the Gaussian kernel parameter, one usually chooses linear increments, e.g. σ ∈ {0.1, 0.2, . . . , 2}. When both σ and λ are to be chosen using cross-validation, combinations of very small and very large λ's and σ's that keep the accuracy above a threshold (e.g., 70%) can be used to determine the ranges for σ and λ. Please note that these are very rough rules of thumb, not general procedures.
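A sketch of the grid construction this footnote describes; the endpoints come from the footnote's own example, while the mapping to scikit-learn's parameters is an added assumption.

```python
import numpy as np

# Log-spaced penalty grid: even increments in log(lambda) between the
# extreme values that kept training accuracy above the threshold
# (10^-3 and 10^6 in the example above), with 20 points as suggested.
lambdas = np.logspace(-3, 6, num=20)

# Linearly spaced grid for the Gaussian kernel width sigma.
sigmas = np.linspace(0.1, 2.0, num=20)

# If cross-validating with sklearn's SVC, note its parametrization:
# C plays the role of 1/lambda and gamma = 1/(2*sigma^2).
param_grid = {"C": 1.0 / lambdas, "gamma": 1.0 / (2.0 * sigmas**2)}
```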

[2] It seems that the data are already normalized.

[3] The convention is to use the L1 penalty with a linear kernel.

[4] It seems that the data are already normalized.
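For items 1(b)iii and 1(b)iv, a minimal sketch under the linear-kernel convention of footnote [3], using scikit-learn's LinearSVC and imbalanced-learn's SMOTE; X_tr and Y_tr are the training arrays from the earlier sketch, and the C range and seed are assumptions.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from imblearn.over_sampling import SMOTE

# Standardize the attributes (footnotes [2]/[4] suggest the data may
# already be normalized, but standardizing again is harmless).
X_std = StandardScaler().fit_transform(X_tr)

for j in range(Y_tr.shape[1]):
    # Item iv: SMOTE oversamples every minority class of this label.
    X_res, y_res = SMOTE(random_state=0).fit_resample(X_std, Y_tr[:, j])

    # Item iii: L1-penalized linear SVM; penalty="l1" requires dual=False.
    search = GridSearchCV(
        LinearSVC(penalty="l1", dual=False, max_iter=5000),
        {"C": np.logspace(-3, 3, 20)},   # assumed range; see footnote [1]
        cv=10, n_jobs=-1)
    search.fit(X_res, y_res)
    print(f"label {j}: best C = {search.best_params_['C']:.4g}")
```

Note that resampling before cross-validation leaks synthetic points into the validation folds; imbalanced-learn's Pipeline can instead apply SMOTE inside each training fold.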

2. K-Means Clustering on a Multi-Class and Multi-Label Data Set

Monte-Carlo Simulation: Perform the following procedures 50 times, and report the average and standard deviation of the 50 Hamming distances that you calculate.

(a) Use k-means clustering on the whole Anuran Calls (MFCCs) Data Set (do not split the data into train and test, as we are not performing supervised learning in this exercise). Choose k ∈ {1, 2, . . . , 50} automatically based on one of the methods provided in the slides (CH index, Gap Statistics, scree plots, or Silhouettes) or any other method you know.

(b) In each cluster, determine which family is the majority by reading the true labels. Repeat for genus and species.

(c) Now for each cluster you have a majority label triplet (family, genus, species). Calculate the average Hamming distance, Hamming score, and Hamming loss [5] between the true labels and the labels assigned by clusters (a sketch of the full procedure follows).
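A minimal sketch of one Monte-Carlo iteration, choosing k by the average silhouette width, one of the methods listed in (a); the silhouette is undefined at k = 1, so the scan starts at 2. X and Y are the attribute matrix and the three true label columns from the earlier sketch, and every other choice is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def one_iteration(X, Y, seed):
    # Pick k in {2, ..., 50} by the largest average silhouette width.
    best_k, best_s = 2, -1.0
    for k in range(2, 51):
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=seed).fit_predict(X)
        s = silhouette_score(X, labels)
        if s > best_s:
            best_k, best_s = k, s
    labels = KMeans(n_clusters=best_k, n_init=10,
                    random_state=seed).fit_predict(X)

    # Majority (family, genus, species) triplet in each cluster.
    Y_hat = np.empty_like(Y)
    for c in np.unique(labels):
        mask = labels == c
        for j in range(Y.shape[1]):
            vals, counts = np.unique(Y[mask, j], return_counts=True)
            Y_hat[mask, j] = vals[np.argmax(counts)]

    # Hamming distance: average number of the three labels that disagree.
    return np.mean((Y != Y_hat).sum(axis=1))

dists = [one_iteration(X, Y, seed) for seed in range(50)]
print("mean:", np.mean(dists), "std:", np.std(dists))
```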

3. ISLR 10.7.2

4. Extra Practice: The rest of the problems in 10.7.

[5] Research what these scores are. For example, see the paper A Literature Survey on Algorithms for Multi-label Learning, by Mohammad Sorower.


