
ECOM30002 Econometrics 2

Group Assignment 4

Deadline: 4pm, Tuesday May 28, 2019

Submission method: Electronically via the LMS

Weight: 7.5%

Material covered: Mainly Lectures 1-22 & Tutorials 1-11

Instructions

Group size: Minimum = 1, maximum = 4. Groups may be formed across different tutorials.

Group registration: Before submission, each group must register using the Group Assignment Registration tool on the LMS. The deadline for group registration will be announced via the LMS.

Cover page: Each assignment must include a cover page listing the full name of every member of the group along with their student ID and the name of their tutor.

Division of marks: Equal marks will be awarded to each member of a group.

Word processing: Assignments should be submitted as fully-typed documents in pdf or Word format. Question numbers should be clearly indicated.

Statistical output: Raw R output is not acceptable. Regression output must be presented in clearly labelled equation or table form. Figures should be presented on an appropriate scale, labelled clearly and with an appropriate heading.

Inference: Unless you are instructed otherwise, all inference is to be conducted at the 5% level of significance using heteroskedasticity-consistent (HC) standard errors.

Length of answers: The word limit is 600. Concise, correct answers to questions requiring interpretation/discussion will be valued over lengthier, unclear and/or off-topic attempts. Equations, figures and tables do not count towards the word limit.

R script: You must append a complete copy of the R script that you have used to generate your results. Your R script does not count towards the word limit.


Section 1: Conceptual Questions (30 marks)

(1.1) Marks available: 5

Consider the graphs below, which show the adjusted closing price for Yahoo shares traded on the NASDAQ and quoted in US$ over the 296 trading days between September 1st, 1999 and October 31st, 2000, along with fitted values from two broken trend models. The data was retrieved from Yahoo Finance.

Model A contains a single trend break and can be written as follows:

y_t = α_0 + α t + α_1 DT_t + U_t

DT_t = 0 if t ≤ T_B; t − T_B if t > T_B

where 1 < T_B < T, t = 1, 2, . . . , T, y_t denotes the Yahoo stock price at time t, DT_t is a broken trend term and the error term U_t ~ i.i.d.(0, σ²). Write out the general form of Model B in similar notation, using general terminology such as T_{B,1}, T_{B,2} etc. to denote the dates of the trend breaks. Explain why Model B fits the data better than Model A. What will happen to the fit of the model as the number of trend breaks gets closer to the sample size, and why?
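As an illustration of how the broken trend term enters the regression, the sketch below constructs DT_t for one break (Model A) and for two breaks. It is written in stdlib Python rather than the R required for submission, and T and the break dates are arbitrary made-up values, not the ones from the Yahoo data.

```python
# Sketch (not part of the assignment): constructing broken-trend regressors.
# T and the break dates TB below are arbitrary illustrative values.

def broken_trend(T, TB):
    """Return DT_t = max(t - TB, 0) for t = 1..T (one regressor per break)."""
    return [max(t - TB, 0) for t in range(1, T + 1)]

T = 10
dt_a = broken_trend(T, TB=4)      # Model A: trend slope changes from t = 5 on
dt_b1 = broken_trend(T, TB=3)     # Model B: first break
dt_b2 = broken_trend(T, TB=7)     # Model B: second break

print(dt_a)   # [0, 0, 0, 0, 1, 2, 3, 4, 5, 6]
```

Each additional break simply adds another regressor of this form, which is why the fitted line becomes ever more flexible as breaks accumulate.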

(1.2) Marks available: 5

The Slutsky-Yule effect is the observation that simple dynamic transformations applied to sequences of random numbers may generate cyclical patterns. To demonstrate the Slutsky-Yule effect, write an R script to generate a sample of 2,000 observations drawn independently from a N(0, 1²) distribution using set.seed(42). Denoting this series of normal random numbers V_t, compute moving sums of V_t over windows of increasing length, as well as their respective autocorrelation functions (ACFs) for lags 1 to 40. Briefly summarise what happens to the behaviour of the moving-sum process as the number of periods over which the sum is taken increases. Why do you think that the Slutsky-Yule effect has been so influential in the development of the theory of business cycles?

Hint: The moving sums can be calculated easily using for loops. However, if you prefer, the function rollsum() in the package zoo can be used to compute the moving sums directly.
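A minimal sketch of the moving-sum demonstration, written in stdlib Python rather than the required R; random.gauss stands in for rnorm, and the window lengths 2, 4 and 8 are assumed examples (use the window lengths specified in your assignment).

```python
# Slutsky-Yule sketch: moving sums of white noise develop autocorrelation.
import random

def moving_sum(v, k):
    """k-period moving sum: s_t = v_t + v_{t-1} + ... + v_{t-k+1}."""
    return [sum(v[t - k + 1:t + 1]) for t in range(k - 1, len(v))]

def acf(x, max_lag):
    """Sample autocorrelations for lags 1..max_lag."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((xi - m) ** 2 for xi in x) / n
    return [sum((x[t] - m) * (x[t - k] - m) for t in range(k, n)) / (n * c0)
            for k in range(1, max_lag + 1)]

random.seed(42)
v = [random.gauss(0, 1) for _ in range(2000)]
for k in (2, 4, 8):                      # assumed window lengths
    rho = acf(moving_sum(v, k), 40)
    print(k, round(rho[0], 3))           # lag-1 autocorrelation of the k-sum
```

For a k-period moving sum of white noise the population lag-1 autocorrelation is (k − 1)/k, so the smoothing becomes stronger, and the series more cyclical-looking, as k grows.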


(1.3) Marks available: 10

Write an R script to generate a time series y_t, t = 1, 2, . . . , T, according to the following data generating process (DGP):

y_t = 10 + 5 D_t + V_t (DGP)

where T = 2000, D_t = 1 for t ≥ 1000 and 0 otherwise, and where V_t ~ N(0, 1²), with set.seed(42). Using your generated time series, y_t, estimate the following two models by OLS:

y_t = α_0 + α_1 D_t + v_t (Model 1)

y_t = δ_0 + δ_1 y_{t−1} + z_t (Model 2)

(a) Model 2 can be written equivalently in the following form:

y_t = β_0 + w_t (Model 2B)

w_t = φ_1 w_{t−1} + z_t

Show how the parameters β_0 and φ_1 can be obtained from δ_0 and δ_1.


(b) Using your OLS estimation results for Model 2, compute values of the parameters β_0 and φ_1 for Model 2B. Tabulate the estimated parameters for Models 1, 2 and 2B (don't worry about their standard errors).

(c) Concisely interpret the estimated parameters for Models 1 and 2B and explain how the coefficients of each model are related to the coefficients of the DGP.

(d) Plot the first 10 autocorrelations and partial autocorrelations for both v̂_t and ŵ_t (note that ŵ_t is not the same as ẑ_t). Briefly explain why the pattern of autocorrelation in v̂_t is different to that in ŵ_t even though the disturbance term in the DGP, V_t, is serially uncorrelated by construction.
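The simulation and estimation steps can be sketched as follows. This is Python rather than the required R: random.gauss stands in for rnorm, and closed-form OLS formulas replace lm(). The Model 2B mapping used at the end (β_0 = δ_0/(1 − δ_1), φ_1 = δ_1) comes from substituting w_t = y_t − β_0 into Model 2B and re-arranging.

```python
# Sketch: simulate the DGP, estimate Models 1 and 2 by OLS, map to Model 2B.
import random

random.seed(42)
T = 2000
D = [0] * 999 + [1] * 1001                 # D_t = 1 for t >= 1000 (t = 1..T)
y = [10 + 5 * d + random.gauss(0, 1) for d in D]

# Model 1: y_t = a0 + a1 D_t + v_t (dummy regression = difference in means)
y0 = [yi for yi, d in zip(y, D) if d == 0]
y1 = [yi for yi, d in zip(y, D) if d == 1]
a0 = sum(y0) / len(y0)
a1 = sum(y1) / len(y1) - a0

# Model 2: y_t = d0 + d1 y_{t-1} + z_t (simple OLS on the lagged level)
x, yy = y[:-1], y[1:]
mx, my = sum(x) / len(x), sum(yy) / len(yy)
d1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, yy)) / \
     sum((xi - mx) ** 2 for xi in x)
d0 = my - d1 * mx

# Model 2B: y_t = b0 + w_t, w_t = phi1 w_{t-1} + z_t
phi1 = d1
b0 = d0 / (1 - d1)

print(round(a0, 2), round(a1, 2), round(phi1, 3), round(b0, 2))
```

The dummy-variable estimates recover the DGP's intercept and shift directly, while the AR(1) representation absorbs the unmodelled level shift into a highly persistent φ_1.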

(1.4) Marks available: 10

Consider the following data generating process (DGP):

y_t = φ y_{t−1} + V_t, V_t ~ i.i.d.(0, σ²) (DGP)

For a given value of φ, this DGP is used to generate R = 50,000 repeated samples of the time series y_t, each of which contains t = 1, 2, . . . , T observations. Let y_t^(r) denote the r-th sample of y_t. For each repeated sample, a simple AR(1) model is estimated by OLS and the AR(1) parameter estimate φ̂^(r) is saved. The mean of the resulting R OLS parameter estimates is then computed as follows:

MEAN(φ̂) = (1/R) Σ_{r=1}^{R} φ̂^(r)

and the bias is computed as:

BIAS(φ̂) = MEAN(φ̂) − φ

Hint: You may find the rep() command useful for generating D_t in question (1.3).

Hint: For question (1.3)(a), you may find it easiest to start with Model 2B, re-arrange it into the form of Model 2 and then work out how the parameters of the two specifications relate to one another.


Figure 1 is plotted by computing BIAS(φ̂) for different values of the population AR(1) parameter φ ∈ {0.1, 0.3, 0.5, 0.7, 0.9} and for different sample sizes T ∈ {10, 15, . . . , 100}. Use Figure 1 to answer the following questions:

(a) Concisely interpret the information depicted in Figure 1.

(b) What does this experiment imply about the estimation of the autoregressive coefficient in an AR(1) model by OLS in finite samples? Explain your answer.

(c) What does this experiment imply about the analysis of autocorrelation in finite samples? Explain your answer.

Figure 1: Bias of the OLS Estimate of the AR(1) Parameter
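A reduced-scale sketch of the experiment behind Figure 1, in Python rather than R, using R = 2,000 replications instead of 50,000 and T = 25 as an example sample size; the downward finite-sample bias of the OLS AR(1) estimate is visible even at this scale.

```python
# Monte Carlo sketch: finite-sample bias of the OLS AR(1) estimate.
import random

def ols_ar1(y):
    """OLS slope from regressing y_t on y_{t-1} (with intercept)."""
    x, z = y[:-1], y[1:]
    mx, mz = sum(x) / len(x), sum(z) / len(z)
    num = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def bias(phi, T, R=2000, seed=42):
    """Mean of R OLS estimates minus the population phi."""
    rng = random.Random(seed)
    est = []
    for _ in range(R):
        y = [rng.gauss(0, 1)]
        for _ in range(T - 1):
            y.append(phi * y[-1] + rng.gauss(0, 1))
        est.append(ols_ar1(y))
    return sum(est) / R - phi

for phi in (0.1, 0.5, 0.9):
    print(phi, round(bias(phi, T=25), 3))   # negative, larger for high phi
```

The well-known approximation BIAS ≈ −(1 + 3φ)/T gives a sense of the magnitudes the simulation should produce.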

Section 2: Empirical Questions (20 marks)

The file A4 Data.csv contains 73 annual observations from 1946 to 2018 on the following variables for the US economy:

- inc_t: real disposable personal income, in billions of US Dollars at 2012 prices

- cons_t: real personal consumption expenditures, in billions of US Dollars at 2012 prices

Both series were downloaded from the Federal Reserve Economic Data Service.

(2.1) Marks available: 6

Consider the following AR(p) model in Δlog(cons_t):

Δlog(cons_t) = μ + φ_1 Δlog(cons_{t−1}) + · · · + φ_p Δlog(cons_{t−p}) + U_t

where μ is the regression intercept, φ_j is the j-th autoregressive parameter and U_t ~ i.i.d.(0, σ²). The model is to be estimated over the period 1946 to 2015, with the final three years of the sample being withheld for forecast evaluation.

(i) Compute and report values of the Akaike Information Criterion (AIC) for p = 1, 2, . . . , 5.

(ii) Use the AIC to determine the optimal lag order of the autoregression, p. Plot the residual autocorrelation function (ACF) for your chosen AR(p) model, setting lag.max=10. Interpret the residual ACF.


(iii) Which lag order would you select if your goal was to maximise the unadjusted R²? Explain your answer. Why is this not a sensible procedure for lag order selection in autoregressive models in practice?
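The AIC comparison in part (i) can be sketched as follows. This is Python instead of R, run on simulated AR(1) data rather than the A4 Data.csv series, and it uses one common AIC convention, T·log(SSR/T) + 2k; note that the estimation sample is held fixed across lag orders so that the criteria are comparable.

```python
# Sketch: AIC-based lag selection for an AR(p), with OLS via normal equations.
import math, random

def ols_ssr(y_dep, X):
    """SSR from OLS of y_dep on the columns of X (rows of regressors)."""
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y_dep)) for i in range(k)]
    for i in range(k):                      # Gaussian elimination, pivoting
        p = max(range(i, k), key=lambda r: abs(xtx[r][i]))
        xtx[i], xtx[p] = xtx[p], xtx[i]
        xty[i], xty[p] = xty[p], xty[i]
        for r in range(i + 1, k):
            f = xtx[r][i] / xtx[i][i]
            xtx[r] = [a - f * b for a, b in zip(xtx[r], xtx[i])]
            xty[r] -= f * xty[i]
    beta = [0.0] * k
    for i in reversed(range(k)):            # back substitution
        beta[i] = (xty[i] - sum(xtx[i][j] * beta[j]
                                for j in range(i + 1, k))) / xtx[i][i]
    resid = [yi - sum(b * xij for b, xij in zip(beta, row))
             for yi, row in zip(y_dep, X)]
    return sum(e * e for e in resid)

def ar_aic(y, p, max_p):
    """AIC for an AR(p) on y, estimation sample held fixed across p."""
    dep = y[max_p:]
    X = [[1.0] + [y[t - j] for j in range(1, p + 1)]
         for t in range(max_p, len(y))]
    T = len(dep)
    return T * math.log(ols_ssr(dep, X) / T) + 2 * (p + 1)

random.seed(42)
y = [random.gauss(0, 1)]
for _ in range(199):                        # simulated AR(1) stand-in data
    y.append(0.6 * y[-1] + random.gauss(0, 1))
for p in range(1, 6):
    print(p, round(ar_aic(y, p, max_p=5), 2))
```

In R the same comparison comes directly from AIC() applied to fitted models, but holding the estimation sample fixed across p remains important there too.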

(2.2) Marks available: 3

Regardless of the model that you selected in question (2.1), use the AR(2) model estimated over 1946 to 2015 to generate forecasts for Δlog(cons_t) for the years 2016, 2017 and 2018. Show your calculations and concisely interpret your forecasts. You do not need to compute standard errors for your forecasts.
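The forecast recursion works the same way in any language. The sketch below uses made-up placeholder coefficients, not estimates from A4 Data.csv; in R you would plug in the coefficients from your fitted AR(2).

```python
# Sketch: h-step-ahead forecasts from an AR(2), iterated forward.
def ar2_forecast(mu, phi1, phi2, y_last, y_prev, h):
    """Iterate y_{T+h} = mu + phi1*y_{T+h-1} + phi2*y_{T+h-2}, h steps."""
    hist = [y_prev, y_last]
    out = []
    for _ in range(h):
        f = mu + phi1 * hist[-1] + phi2 * hist[-2]
        out.append(f)
        hist.append(f)          # later forecasts feed on earlier forecasts
    return out

# Example with placeholder coefficients and end-of-sample values:
print(ar2_forecast(mu=0.02, phi1=0.3, phi2=0.1, y_last=0.025, y_prev=0.03, h=3))
```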

(2.3) Marks available: 5

Consider the following VAR(p) model in Y_t = (Δlog(inc_t), Δlog(cons_t))′:

Y_t = μ + φ_1 Y_{t−1} + · · · + φ_p Y_{t−p} + U_t

where μ is a vector of intercepts, φ_j is the j-th autoregressive parameter matrix, U_t ~ i.i.d.(0, Σ) and Σ is the residual covariance matrix. The model is to be estimated over the period 1946 to 2015, with the final three years of the sample being withheld for forecast evaluation.

(i) Compute and report values of the Akaike Information Criterion (AIC) for p = 1, 2, . . . , 5.

(ii) Use the AIC to determine the optimal lag order of the VAR, p. Plot the residual ACFs for both equations of your chosen VAR(p) model, setting lag.max=10. Interpret the residual ACFs.

(iii) Consider the following ARDL(p,q) model:

Δlog(cons_t) = μ + φ_1 Δlog(cons_{t−1}) + · · · + φ_p Δlog(cons_{t−p}) + λ_1 Δlog(inc_{t−1}) + · · · + λ_q Δlog(inc_{t−q}) + U_t

where μ is the regression intercept, φ_j is the j-th autoregressive parameter, λ_ℓ is the ℓ-th distributed lag parameter and U_t ~ i.i.d.(0, σ²). In what sense is an ARDL(p,q) model of this form less useful for forecasting than a VAR(p) model, and why?
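The sketch below iterates a bivariate VAR(1) forecast recursion with made-up placeholder coefficients, illustrating how a VAR generates forecasts for all of its variables jointly, with each variable's forecast feeding into the other's at the next step.

```python
# Sketch: h-step-ahead forecasts from a bivariate VAR(1).
def var1_forecast(mu, Phi, y_last, h):
    """y_{T+h} = mu + Phi y_{T+h-1}, iterated h steps (2-variable case)."""
    y = list(y_last)
    out = []
    for _ in range(h):
        y = [mu[i] + Phi[i][0] * y[0] + Phi[i][1] * y[1] for i in range(2)]
        out.append(y)
    return out

mu = [0.02, 0.015]                 # placeholder intercepts
Phi = [[0.2, 0.1],                 # placeholder coefficient matrix
       [0.3, 0.2]]
print(var1_forecast(mu, Phi, y_last=[0.03, 0.025], h=3))
```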

(2.4) Marks available: 3

Regardless of the model that you selected in question (2.3), use the VAR(2) model estimated over 1946 to 2015 to generate forecasts for Δlog(cons_t) for the years 2016, 2017 and 2018. Concisely interpret your forecasts. You do not need to compute standard errors for your forecasts.

(2.5) Marks available: 3

Compute and report the root mean squared error for the forecasts generated from both the AR(2) and VAR(2) models over the period 2016-2018. Which model provides better forecasts according to this criterion? Explain your answer.
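The RMSE criterion itself can be sketched as follows; all numbers below are made-up placeholders standing in for the actual 2016-2018 forecasts and outcomes.

```python
# Sketch: root mean squared forecast error over the evaluation sample.
import math

def rmse(actual, forecast):
    """sqrt of the average squared forecast error."""
    return math.sqrt(sum((a - f) ** 2
                         for a, f in zip(actual, forecast)) / len(actual))

actual = [0.027, 0.026, 0.030]     # placeholder outcomes
ar2_f  = [0.030, 0.031, 0.031]     # placeholder AR(2) forecasts
var2_f = [0.028, 0.027, 0.032]     # placeholder VAR(2) forecasts
print(rmse(actual, ar2_f), rmse(actual, var2_f))  # smaller RMSE = better
```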


