Time Series_3_Generalisation

1. Linear and Log-Linear Trend Model

1.1 Assumptions: (1) E[ε] = 0;

          (2) ε is normally distributed;

          (3) E[ε²] = σ² (constant); violation of this is called 'heteroskedasticity', when some subsamples spread out more than the rest of the sample (it can be conditional or unconditional);

          (4) ε is iid; violation is called 'autocorrelation' or 'serial correlation' (positive or negative);

          (5) the independent variables are uncorrelated with the residuals;

          (6) no exact linear relation between any two or more independent variables; violation is called 'multicollinearity';

1.2 Deal with Heteroskedasticity

1.2.1 Scatter plots 

1.2.2 Breusch-Pagan chi-square test:

  (1) BP statistic = n × R², with df = k, where n = # of observations and k = # of independent variables; R² comes from a second regression, of the squared residuals ε² from the first regression, on the independent variables;

  (2) This is a regression of ε² on the independent variables: if conditional heteroskedasticity exists, the independent variables will contribute significantly to explaining ε².

  (3) It is a one-tailed (right-tail) test: heteroskedasticity is a problem only if R² and the BP statistic are large.
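The two-step procedure can be sketched with simulated data (a minimal numpy illustration; the data-generating process and all variable names are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0.0, 3.0, size=n)
eps = rng.normal(size=n) * (1.0 + x)   # error spread grows with x: conditional heteroskedasticity
y = 1.0 + 0.5 * x + eps

# 1st regression: y on the independent variable, keep the residuals
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# 2nd regression: squared residuals on the same independent variables
gamma = np.linalg.lstsq(X, resid**2, rcond=None)[0]
u = resid**2
r2 = 1.0 - np.sum((u - X @ gamma)**2) / np.sum((u - u.mean())**2)

bp = n * r2   # BP statistic, chi-square with df = k = 1 here; 5% critical value ~ 3.84
```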

1.2.3 Correction

  Calculate 'robust standard errors' (also called 'White-corrected standard errors' or 'heteroskedasticity-consistent standard errors') and use them to recalculate the t-statistics with the original regression coefficients.
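A sketch of the White (HC0) correction in numpy (my own illustration on simulated heteroskedastic data; statistics libraries expose the same correction through robust-covariance options):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = rng.uniform(0.0, 3.0, size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n) * (1.0 + x)   # heteroskedastic errors

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta

XtX_inv = np.linalg.inv(X.T @ X)
# Conventional OLS standard errors: sqrt(diag(s^2 (X'X)^-1))
s2 = e @ e / (n - X.shape[1])
se_ols = np.sqrt(np.diag(s2 * XtX_inv))
# White (HC0) 'sandwich' estimator: (X'X)^-1 X' diag(e^2) X (X'X)^-1
meat = X.T @ (X * (e**2)[:, None])
se_white = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

# t-statistics reuse the original coefficients; only the standard errors change
t_white = beta / se_white
```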

1.3 Deal with Autocorrelation

1.3.1 Residual plots

1.3.2 Durbin-Watson statistic

  (1) DW ≈ 2(1 − r), where r is the correlation coefficient between residuals from one period and those from the previous period;

  (2) DW = 2 means r = 0: the ε terms are not serially correlated;

     DW > 2 means r < 0: the ε terms are negatively serially correlated;

     DW < 2 means r > 0: the ε terms are positively serially correlated;

  (3) DW Decision Rule: H0 is 'there is no positive autocorrelation.'

            Reject H0 if DW < d_lower, and conclude positive autocorrelation;

            The test is inconclusive if d_lower ≤ DW ≤ d_upper;

            Do not reject H0 if DW > d_upper;
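The statistic and the DW ≈ 2(1 − r) approximation can be checked directly (my own numpy sketch on simulated AR(1) residuals):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
e = np.zeros(n)
for t in range(1, n):                 # AR(1) residuals with rho = 0.6
    e[t] = 0.6 * e[t-1] + rng.normal()

dw = np.sum(np.diff(e)**2) / np.sum(e**2)   # Durbin-Watson statistic
r = np.corrcoef(e[1:], e[:-1])[0, 1]        # lag-1 residual correlation
# dw is close to 2*(1 - r) and well below 2, signalling positive autocorrelation
```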

1.3.3 Correction

  (1) Adjust the coefficient standard errors using the Hansen method;

  (2) Improve the specification of the model;

1.4 Deal with Multicollinearity

1.4.1 Signal: t-tests indicate that no coefficient is significantly different from zero while, at the same time, the F-test is significant and R² is high.

1.4.2 Correction: Omit one or more independent variables using stepwise regression.

2. Forecasting Models: Autoregressive (AR) Models

2.1 First Differencing: transform the data into a covariance-stationary series when it is a random walk (i.e. has a unit root).

2.1.1 Stationarity (history is relevant for forecasting):

        The distribution doesn't change over time;

        E(Yt) and Var(Yt) are constant;

        Cov(Yt, Yt−j) doesn't depend on t;
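First differencing in code (my own minimal illustration): a random walk yt = yt−1 + εt is not stationary, but its first difference Δyt = yt − yt−1 = εt recovers the iid innovations, which are.

```python
import numpy as np

rng = np.random.default_rng(3)
steps = rng.normal(size=1000)   # iid innovations eps_t
y = np.cumsum(steps)            # random walk: y_t = y_{t-1} + eps_t (unit root)

dy = np.diff(y)                 # first difference: Delta y_t = eps_t
# The differenced series is the iid (hence covariance-stationary) innovations
print(np.allclose(dy, steps[1:]))   # True
```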

An AR model is correctly specified when there is no autocorrelation in the ε terms; otherwise the AR model needs correction.

2.2 Autoregressive Model AR(p)

2.2.1 Method one: run the AR model and examine the autocorrelations of the residuals.

2.2.2 Method two: Dickey–Fuller test.

2.2.3 Mean Reversion: if xt > b0 / (1 − b1), the AR(1) model predicts xt+1 will be lower than xt; the mean-reverting level is b0 / (1 − b1).
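Mean reversion in an estimated AR(1), as a numpy sketch (the simulated process, seed, and variable names are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
b0, b1, n = 2.0, 0.5, 500       # true AR(1): x_t = b0 + b1*x_{t-1} + eps_t
x = np.zeros(n)
x[0] = b0 / (1 - b1)            # start at the true mean-reverting level (= 4)
for t in range(1, n):
    x[t] = b0 + b1 * x[t-1] + rng.normal()

# Fit x_t on x_{t-1} by OLS
X = np.column_stack([np.ones(n - 1), x[:-1]])
b0_hat, b1_hat = np.linalg.lstsq(X, x[1:], rcond=None)[0]

level = b0_hat / (1 - b1_hat)   # estimated mean-reverting level
xt = 10.0                       # current value above the level ...
pred = b0_hat + b1_hat * xt     # ... so the one-step forecast is below x_t
```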

2.3 Autoregressive Distributed Lag Model ADL(p, r)

The standard form (p lags of y, r lags of the explanatory variable x):

yt = b0 + b1·yt−1 + … + bp·yt−p + c1·xt−1 + … + cr·xt−r + εt

2.4 Model Selection

Use an information criterion, BIC (Bayesian Information Criterion) or AIC, to decide the optimal lag length p. One common form, for an AR(p) fitted on T observations:

BIC(p) = ln(SSR(p)/T) + (p + 1)·ln(T)/T

AIC(p) = ln(SSR(p)/T) + (p + 1)·2/T

Choose the p that minimizes BIC; BIC penalizes extra lags more heavily than AIC.
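Lag selection by BIC, using the common form ln(SSR/T) + (p + 1)·ln(T)/T, can be sketched as follows (my own numpy illustration on simulated AR(2) data; all names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 600
x = np.zeros(n)
for t in range(2, n):                  # true process is AR(2)
    x[t] = 0.5 * x[t-1] + 0.3 * x[t-2] + rng.normal()

def bic_ar(x, p, max_p=4):
    """BIC(p) = ln(SSR/T) + (p+1)*ln(T)/T, on a common sample of T observations."""
    y = x[max_p:]                      # same effective sample for every p
    T = len(y)
    X = np.column_stack([np.ones(T)] + [x[max_p - j: len(x) - j] for j in range(1, p + 1)])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    ssr = np.sum((y - X @ beta)**2)
    return np.log(ssr / T) + (p + 1) * np.log(T) / T

bics = [bic_ar(x, p) for p in range(1, 5)]
best_p = 1 + int(np.argmin(bics))      # typically picks p = 2 for this process
```

Comparing all candidate p on the same effective sample (dropping max_p initial observations) keeps the SSR terms comparable.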

3. Other Notations

3.1 Use clustered HAC standard errors in panel data to correct for serial correlation: they allow errors to be correlated within a cluster but not across clusters. In time-series data, use Newey–West HAC standard errors, which account for correlation between lagged values of the series.

Rule of thumb: m (# of lags) = 0.75·T^(1/3), where T is the # of observations (or periods).
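The bandwidth rule in code (my own sketch; the rounding convention varies by source, truncation is used here):

```python
def nw_lags(T: int) -> int:
    """Newey-West lag rule of thumb: m = 0.75 * T^(1/3), truncated to an integer."""
    return int(0.75 * T ** (1 / 3))

print(nw_lags(100))    # 0.75 * 4.64 ~ 3.48 -> 3
print(nw_lags(1000))   # 0.75 * 10.0      -> 7
```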

3.2 Seasonality

3.3 ARCH

ARCH exists if the variance of ε in one period depends on the squared error ε² of the previous period. In that case, the standard errors of the regression coefficients in AR models, and hypothesis tests of these coefficients, are invalid.
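Testing for ARCH amounts to regressing ε²t on ε²t−1 and checking whether the lag matters; a minimal numpy sketch on simulated ARCH(1) errors (process parameters and names are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
a0, a1 = 0.2, 0.5                    # ARCH(1): var_t = a0 + a1 * eps_{t-1}^2
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = rng.normal() * np.sqrt(a0 + a1 * eps[t-1]**2)

# Regress eps_t^2 on eps_{t-1}^2 and form an LM-style statistic n * R^2
X = np.column_stack([np.ones(n - 1), eps[:-1]**2])
g = np.linalg.lstsq(X, eps[1:]**2, rcond=None)[0]
u = eps[1:]**2
r2 = 1 - np.sum((u - X @ g)**2) / np.sum((u - u.mean())**2)
lm = (n - 1) * r2                    # compare to chi-square(1); large => ARCH present
```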

3.4 Cointegration: two time series are economically linked (related to the same macro variables) or follow the same trend (a relationship that is not expected to change).

The ε term from a regression of one series on the other will be covariance stationary, and t-tests will be reliable.

yt = b0 + b1xt + ε

The ε terms are tested for a unit root using a Dickey–Fuller test with critical values calculated by Engle and Granger (the DF–EG test). If the test rejects the null of a unit root, the ε terms generated by the two time series are covariance stationary, the series are cointegrated, and we can use the regression to model their relationship.
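The two steps can be sketched in numpy (my own illustration on simulated cointegrated series; the −3.34 critical value quoted in the comment is an approximate Engle–Granger 5% value, not the standard DF value):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
x = np.cumsum(rng.normal(size=n))        # random walk (unit root)
y = 1.0 + 2.0 * x + rng.normal(size=n)   # cointegrated with x: stationary error

# Step 1: regress y on x and keep the residuals
X = np.column_stack([np.ones(n), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ b

# Step 2: Dickey-Fuller regression on the residuals: delta_e_t = gamma * e_{t-1} + u_t
de, lag = np.diff(e), e[:-1]
gamma = (lag @ de) / (lag @ lag)
resid = de - gamma * lag
se = np.sqrt((resid @ resid) / (len(de) - 1) / (lag @ lag))
t_gamma = gamma / se
# Compare t_gamma with the Engle-Granger critical values (about -3.34 at 5%):
# a strongly negative statistic rejects the unit-root null, i.e. cointegration
```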

 

posted on 2020-03-14 02:34 by sophhhie