Friday, 2 March 2018

Panel Data Analysis (Lecture 2): How to Perform the Hausman Test in EViews


Introduction to Panel Data Models

The panel data approach pools time series data with cross-sectional data. Depending on the application, it can comprise a sample of individuals, firms, countries, or regions over a specific time period. The general structure of such a model could be expressed as follows:

Yit = a0 + bXit + uit

where uit ~ IID(0, σ²), i = 1, 2, ..., N individual-level observations, and t = 1, 2, ..., T time series observations.

In this application, it is assumed that Yit is a continuous variable. In this model, the observations of each individual, firm or country are simply stacked over time on top of one another. This is the standard pooled model where intercepts and slope coefficients are homogeneous across all N cross-sections and through all T time periods. The application of OLS to this model ignores the temporal and spatial dimensions inherent in the data and thus throws away useful information. It is important to note that the temporal dimension captures the ‘within’ variation in the data while the spatial dimension captures the ‘between’ variation. The pooled OLS estimator exploits both ‘between’ and ‘within’ dimensions of the data but does not do so efficiently: each observation is given equal weight in estimation. In addition, the unbiasedness and consistency of the estimator require that the explanatory variables are uncorrelated with any omitted factors. The limitations of OLS in such an application prompted interest in alternative procedures. There are a number of different panel estimators but the most popular is the fixed effects (or ‘within’) estimator.
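To make the pooled estimator concrete, here is a minimal Stata sketch; the variable names y, x1, x2 and the panel identifier id are hypothetical placeholders, not from any particular dataset:

* Pooled OLS: stack all i,t observations and ignore the panel structure
regress y x1 x2
* Clustering the standard errors by panel unit is a common minimal adjustment
regress y x1 x2, vce(cluster id)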

Fixed Effects or Random Effects?
The question is usually asked: which econometric model should an investigator use when modelling with panel data? The different models can generate considerably different results, as has been documented in many empirical studies. For a model where time effects are assumed absent for simplicity, the model to be estimated may be given by:

Yit = ai + bXit + uit

The question, therefore, is do we treat ai as fixed or random? The following points are worth noting.

1) The estimation of the fixed effects model is costly in terms of degrees of freedom. This is a statistical and not a computing cost. It is particularly problematic when N is large and T is small, a combination that characterizes most panel data applications currently encountered.
2) The ai terms are taken to characterize (for want of a better expression) investigator ignorance. In the fixed effects model, does it make sense to treat one type of investigator ignorance (ai) as fixed but another (uit) as random?
3) The fixed effects formulation is viewed as one where investigators make inferences conditional on the fixed effects in the sample.
4) The random effects formulation is viewed as one where investigators make unconditional inferences with respect to the population of all effects.
5) The random effects formulation treats the random effects as independent of the explanatory variables (i.e. E(aiXit) = 0). Violation of this assumption leads to bias and inconsistency in the b vector.

Advantage and disadvantage of the fixed effects model
The main advantage of the fixed effects model is its relative ease of estimation and the fact that it does not require independence of the fixed effects from the other included explanatory variables. The main disadvantage is that it requires estimation of N separate intercepts. This causes problems because much of the variation that exists in the data may be used up in estimating these different intercept terms. As a consequence, the estimated effects (the bs) of the other explanatory variables in the regression model may be imprecise, and these might represent the more important parameters of interest from the perspective of policy. The fixed effects estimator is derived using the deviations of the cross-sectional observations from the long-run (time) average value for each cross-sectional unit. The problem is therefore most acute when there is little variation or movement in the characteristics over time, that is, when the variables are rarely changing or time-invariant. In essence, the effects of these variables are eliminated from the analysis.
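Because the ‘within’ transformation is central here, a short Stata sketch of the demeaning may help; the variables y, x and the panel identifier id are hypothetical:

* The 'within' transformation by hand
bysort id: egen ybar = mean(y)
bysort id: egen xbar = mean(x)
gen y_dm = y - ybar
gen x_dm = x - xbar
* The slope below equals the xtreg, fe slope (standard errors differ
* slightly because xtreg corrects the degrees of freedom)
regress y_dm x_dm, noconstant

Note that any time-invariant x has x_dm = 0 everywhere, which is exactly why such variables drop out of the fixed effects analysis.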

Advantage and disadvantage of the random effects model
The main advantage of the random effects estimator is that it uses up fewer degrees of freedom in estimation and allows for the inclusion of time invariant covariates. The main disadvantage of the model is the assumption that the random effects are independent of the included explanatory variables. It is fairly plausible that there may be unobservable attributes not included in the regression model that are correlated with the observable characteristics. This procedure, unlike fixed effects, does not allow for the elimination of the omitted heterogeneous effects.

The Hausman Test
In determining which model is the more appropriate to use, a statistical test can be implemented. The Hausman test compares the random effects estimator to the ‘within’ estimator. If the null is rejected, this favours the ‘within’ estimator’s treatment of the omitted effects (i.e., it favours the fixed effects but only relative to the random effects). The use of the test in this case is to discriminate between a model where the omitted heterogeneity is treated as fixed and correlated with the explanatory variables, and a model where the omitted heterogeneity is treated as random and independent of the explanatory variables.

·      If the omitted effects are uncorrelated with the explanatory variables, the random effects estimator is consistent and efficient. However, the fixed effects estimator is consistent but not efficient given the estimation of a large number of additional parameters (i.e., the fixed effects).

·      If the effects are correlated with the explanatory variables, the fixed effects estimator is consistent but the random effects estimator is inconsistent. The Hausman test provides the basis for discriminating between these two models and the matrix version of the Hausman test is expressed as:

[bRE − bFE][V(bFE) − V(bRE)]⁻¹[bRE − bFE]′ ~ χ²(k)

where k is the number of covariates (excluding the constant) in the specification. If the random effects are correlated with the explanatory variables, then there will be a statistically significant difference between the random effects and the fixed effects estimates. Thus, the null and alternative hypotheses are expressed as:

H0: Random effects are independent of the explanatory variables
H1: H0 is not true.

The null hypothesis is the random effects model, and if the test statistic exceeds the relevant critical value, the random effects model is rejected in favour of the fixed effects model. In finite samples the difference between the two variance-covariance matrices may fail to be positive-definite, yielding a negative (and hence non-interpretable) value for the chi-squared statistic.
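To see the mechanics, here is a minimal Stata sketch that computes the statistic by hand from the two estimated coefficient vectors; y, x1 and x2 are hypothetical variables, the panel is assumed already xtset, and in practice the built-in hausman command (used later in this tutorial) does this for you:

quietly xtreg y x1 x2, fe
matrix b_fe = e(b)
matrix V_fe = e(V)
quietly xtreg y x1 x2, re
matrix b_re = e(b)
matrix V_re = e(V)
* keep the k = 2 slope coefficients, dropping the constant
matrix d  = b_re[1, 1..2] - b_fe[1, 1..2]
matrix Vd = V_fe[1..2, 1..2] - V_re[1..2, 1..2]
matrix H  = d * invsym(Vd) * d'
display "Hausman chi2(2) = " H[1,1] "  p-value = " chi2tail(2, H[1,1])

If Vd is not positive-definite (the finite-sample problem just noted), H[1,1] can come out negative.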

The selection of one model over the other might be dictated by the nature of the application. For example, if the cross-sectional units are countries or states, it may be plausible to assume that the omitted effects are fixed in nature and not the outcome of a random draw. If, instead, we are dealing with a sample of individuals or firms drawn from a population, the random effects model has greater appeal. Ultimately, however, the choice of model is dictated empirically. If it does not prove possible to discriminate between the two models on the basis of the Hausman test, it may be safest to use the fixed effects model, where the consequences of a correlation between the effects and the explanatory variables are less severe than in the random effects model, whose failure in this respect results in inconsistent estimates. Of course, if the random effects are found to be independent of the covariates, the random effects model is the most appropriate because it provides a more efficient estimator than the fixed effects estimator.

**This tutorial is culled from my lecture notes as given by Prof. Barry Reilly (Professor of Econometrics, University of Sussex, UK).


How to Perform the Hausman Test in EViews
First: Load file into EViews and create Group data (see video on how to do this)

Second: Perform fixed effects estimation: Quick >> Estimate Equation >> Panel Options >> Fixed >> OK
[Figure: EViews Equation Estimation dialog box. Source: CrunchEconometrix]

Third: Perform random effects estimation: Quick >> Estimate Equation >> Panel Options >> Random >> OK

Fourth: Perform the Hausman test: View >> Fixed/Random Effects testing >> Correlated Random Effects – Hausman Test

Fifth: Interpret results:
Reject the null hypothesis if the p-value is below the chosen significance level (say, 5%). Rejection implies that the individual effects (ai) are correlated with the explanatory variables; therefore use the fixed effects estimator. Otherwise, use the random effects estimator.

[Watch video tutorial on performing the Hausman test in EViews]

If you still have comments or questions regarding how to perform the Hausman test, kindly post them in the comments section below…..

Wednesday, 28 February 2018

Panel Data Analysis (Lecture 2): How to Perform the Hausman Test in Stata

 

(The introduction to panel data models, the discussion of fixed versus random effects, and the theory of the Hausman test are identical to the material in the EViews lecture above and are not repeated here.)

How to Perform the Hausman Test in Stata
First: Open a log file, load data into Stata, use a do-file (to replicate your research)

Second: Inform Stata that you are using a panel, with ‘id’ as the cross-sectional indicator and ‘year’ as the time period indicator, to prepare for panel data analysis.
xtset id year

Third: Create year dummies (to capture time variations in the data)
tab year, gen(yr)

Fourth: Run the fixed effects model and store the results
eststo fixed: xtreg y x1 x2 x3 x4 yr2-yr..., fe

Fifth: Run the random effects model and store the results
eststo random: xtreg y x1 x2 x3 x4 yr2-yr..., re

Sixth: Run the Hausman test
hausman fixed random
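If the finite-sample problem noted earlier arises (a negative chi-squared statistic), a commonly used remedy is hausman's sigmamore option, which bases both covariance matrices on the disturbance variance estimate from the efficient (random effects) model:

hausman fixed random, sigmamore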

Seventh: Interpret results: Reject the null hypothesis if the p-value is below the chosen significance level (say, 5%). Rejection implies that the individual effects (ai) are correlated with the explanatory variables; therefore use the fixed effects estimator. Otherwise, use the random effects estimator.


[Watch video tutorial on performing the Hausman test in Stata]
If you still have comments or questions regarding how to perform the Hausman test, kindly post them in the comments section below…..

Monday, 26 February 2018

Time Series Analysis (Lecture 3): How to Perform Stationarity Test in Excel

What is Stationarity in Time Series Analysis?

In econometrics, time series data are frequently used and they often pose distinct problems for econometricians. As will be discussed with examples, most empirical work based on time series data assumes that the underlying series is stationary. Stationarity of a series (that is, a variable) implies that its mean, variance and covariances are constant over time; that is, they do not vary systematically over time. In other words, they are time-invariant. If that is not the case, the series is nonstationary. We will discuss some possible scenarios where two series, Y and X, are nonstationary and the error term, u, is also nonstationary; in that case, the error term will exhibit autocorrelation. Another likely scenario is where Y and X are nonstationary but u is stationary. The implications of this will also be explored. In time series analysis, the terms nonstationary, unit root and random walk are used synonymously. In essence, if a series is considered nonstationary, it exhibits a unit root and exemplifies a random walk.

Regressing two nonstationary series on each other likewise yields a spurious (or nonsense) regression, that is, a regression whose outcome cannot be used for inference or forecasting. In short, such results should not be taken seriously and must be discarded. A stationary series will tend to return to its mean (called mean reversion) and fluctuations around this mean (measured by its variance) will have a broadly constant breadth. If a time series is not stationary in the sense just explained, it is called a nonstationary time series; such a series will have a time-varying mean, a time-varying variance, or both. In summary, stationarity is important because if a series is nonstationary, its behaviour can be studied only for the time period under consideration: each set of time series data will be for a particular episode, and it is not possible to generalise its relevance to other time periods. For the purpose of forecasting, therefore, such (nonstationary) time series may be of little practical value.

How to detect unit root in a series?
In a bivariate (two-variable) model, or one involving multiple variables (a multiple regression model), it is assumed that all the variables are stationary at level (that is, the order of integration of each variable is zero, I(0)). The order of integration of a series in a regression model is determined by the outcome of a unit root (or stationarity) test. If the series is stationary at level after performing a unit root test, then it is I(0); otherwise it is I(d), where d represents the number of times the series must be differenced before it becomes stationary. But what if the assumption of stationarity at level is relaxed and we allow for a unit root in each of the variables in the model? How can this be handled? In general, it requires a different treatment from a conventional regression with stationary I(0) variables.

In particular, we focus on a class of linear combinations of unit root processes known as cointegrated processes. The generic representation of the order of integration of a series is I(d), where d is the number of times the series must be differenced to render it stationary. Hence a series that is stationary at level (d = 0) is an I(0) process. Although for a nonstationary series d can assume any value greater than zero, in applied research only the I(1) unit root process is typically allowed; series with a higher order of integration (d > 1) should be excluded from the model, as no meaningful policy implications or relevance can be drawn from such series.

Here is an example of a bivariate linear regression model:

                                            Yt = 𝛂₀ + bXt + ut                                               [1]

Assume Yt and Xt are two random walk models that are I(1) processes and are independently distributed as:
                       Yt = ρYt-1 + vt,                     -1 ≤ ρ ≤ 1                                 [2]
                       Xt = ηXt-1 + et,                    -1 ≤ η ≤ 1                                 [3]

and vt and et have zero mean, a constant variance and are orthogonal (these are white noise error terms). 

We also assume that vt and et are serially uncorrelated as well as mutually uncorrelated. When ρ = η = 1 in [2] and [3], both time series are nonstationary; that is, they are I(1) or exhibit stochastic trends. Suppose we regress Yt on Xt. Since Yt and Xt are uncorrelated I(1) processes, the R² from the regression of Y on X should tend to zero; that is, there should not be any relationship between the two variables. Equations [2] and [3] resemble the Markov first-order autoregressive model. If ρ = η = 1, the equations become random walk models without drift and a unit root problem surfaces, that is, a situation of nonstationarity, because in this case the variance of Yt is not stationary. The name unit root is due to the fact that ρ = 1. Again, the terms nonstationary, random walk, and unit root can be treated as synonymous. If, however, |ρ| < 1 and |η| < 1, that is, if their absolute values are less than one, it can be shown that both series Yt and Xt are stationary. In practice, then, it is important to find out if a time series possesses a unit root.

Given equations [2] and [3], there should be no systematic relationship between Yt and Xt as they both drift away from equilibrium (i.e. they do not converge), and therefore, we should expect that an ordinary least squares (OLS) estimate of b should be close to zero, or insignificantly different from zero, at least as the sample size increases. But this is not usually the case. The fitted coefficients in this case may be statistically significant even when there is no true relationship between the dependent variable and the regressors. This is regarded as a spurious regression or correlation where, in the case of our example, b takes any value randomly, and its t-statistic indicates significance of the estimate.

But how can a unit root be detected? There are some clues that tell you whether a series is nonstationary and whether a bivariate or multivariate regression is spurious (see the simulation sketch after this list). Some of these are:
1.  Do a graphical plot of the series to visualise its nature. Is it trending upwards or downwards? Does it exhibit mean reversion or not? Or are there fluctuations around its mean?
2.  Carry out a regression analysis on the two series and observe the R². If it is above 0.9, this may suggest that the variables are nonstationary.
3.  The rule of thumb: if the R² obtained from the regression is higher than the Durbin-Watson (DW) statistic, suspect a spurious regression. A low DW statistic evidences positive first-order autocorrelation of the error terms.
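As an illustration of point 3, here is a short Stata simulation sketch (the seed and sample size are arbitrary): regressing two independently generated random walks typically produces a 'significant' slope, a sizable R² and a very low DW statistic.

* Spurious regression demo: two independent random walks
clear
set obs 200
set seed 2018
gen t = _n
tsset t
gen y = sum(rnormal())   // running sum of white noise = a random walk
gen x = sum(rnormal())   // an independent random walk
regress y x              // slope typically 'significant' by the t test
estat dwatson            // DW typically far below the R-squared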

Using Gujarati and Porter's Table 21.1 quarterly data from 1970q1 to 1991q4, examples of nonstationary series and spurious regression can be seen in the lnpce and lnpdi relationship. Since the series are measured in billions of US dollars, the natural logarithms of the variables are used in analysing their essential features.

Nonstationary series: the graphical plot of the two variables shows an upward trend and neither variable reverts to its mean. That is, the data generating process of each series does not evolve around a fixed mean, which clearly shows that the series are nonstationary.

[Figure: Excel line plot of lnpce and lnpdi, an example of nonstationary series. Source: CrunchEconometrix]

Note: To generate the graph: Highlight the cells, go to Insert >> Recommended Charts >> All Charts >> Line

What is a spurious regression? Sometimes we expect to find no relationship between two variables, yet a regression of one on the other often shows a significant relationship. This situation exemplifies the problem of spurious, or nonsense, regression. The regression of lnpce on lnpdi shows how a spurious regression can arise when time series are not stationary. As expected, because both variables are nonstationary, the result evidences a spurious regression.

[Figure: Excel output, an example of a spurious regression. Source: CrunchEconometrix]
                      
                             [Watch video on how to compute the Durbin Watson d statistic]


As you can see, the coefficient of lnpdi is highly statistically significant and the R² value is very high. From these results you may be tempted to conclude that there is a significant statistical relationship between the two variables, whereas a priori there may be none. This is simply the phenomenon of spurious or nonsense regression, first discovered by Yule (1926), who showed that (spurious) correlation could persist in nonstationary time series even when the sample is very large. That there is something wrong in the preceding regression is suggested by the extremely low Durbin-Watson value, which indicates very strong first-order autocorrelation. According to Granger and Newbold, R² > DW is a good rule of thumb for suspecting that an estimated regression is spurious, as in the given example.

Why is it important to test for stationarity?
We usually test whether a series is nonstationary for the following reasons:
1.  To evaluate the behaviour of a series over time. Is the series trending upward or downward? This can be verified by performing a stationarity test. In other words, the test can be used to evaluate the stability or predictability of a time series. If a series is nonstationary, it is unstable or unpredictable and therefore may not be valid for inference, prediction or forecasting.

2.  To know how a series responds to shocks, which requires carrying out a stationarity test. If the series is nonstationary, the impact of shocks is likely to be permanent; if it is stationary, the impact of shocks will be temporary or brief. (See the simulation sketch below.)
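The contrast in point 2 is easy to see by simulation. A minimal Stata sketch (the seed and the size of the shock are arbitrary) compares a stationary AR(1) series with a random walk after a one-off shock:

* Persistence of a one-off shock: stationary AR(1) versus random walk
clear
set obs 100
set seed 42
gen t = _n
tsset t
gen e = rnormal()
replace e = e + 10 in 50                     // one-off shock at t = 50
gen y_ar = e in 1
replace y_ar = 0.5*y_ar[_n-1] + e if t > 1   // shock dies out over time
gen y_rw = e in 1
replace y_rw = y_rw[_n-1] + e if t > 1       // shock is permanent
tsline y_ar y_rw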

How to correct for nonstationarity of a series?
What can be done about nonstationarity in a time series, knowing that performing OLS on such a model yields a spurious regression?

The Unit Root Test
We begin with equations [2] and [3], which are unit root (stochastic) processes with white noise error terms. If the parameters of the models are equal to 1, that is, in the case of a unit root, both equations become random walk models without drift, which we know is a nonstationary stochastic process. So what can be done? For equation [2], simply regress Yt on its (one-period) lagged value Yt-1 and find out if the estimated ρ is statistically equal to 1. If it is, then Yt is nonstationary. Repeat the same for the Xt series. This is the general idea behind the unit root test of stationarity.

For theoretical reasons, equation [2] is manipulated as follows: subtract Yt-1 from both sides of [2] to obtain:
                                           Yt − Yt-1 = ρYt-1 − Yt-1 + vt                        [4]
                                                        = (ρ − 1)Yt-1 + vt
                                     
and this can be stated alternatively as:
ΔYt = δYt-1 + vt                                        [5]

where δ = (ρ − 1) and Δ, as usual, is the first-difference operator. In practice, therefore, instead of estimating [2], we estimate [5] and test the null hypothesis that δ = 0. If δ = 0, then ρ = 1; that is, we have a unit root, meaning the time series under consideration is nonstationary.

Before we proceed to estimate [5], it may be noted that if δ = 0, [5] will become:

ΔYt = Yt − Yt-1 = vt                     [6]

(Remember to do the same for Xt series)

Since vt is a white noise error term, it is stationary, which means that the first difference of a random walk time series is stationary.
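This can be verified by simulation. A minimal Stata sketch (arbitrary seed) generates a pure random walk and runs the Dickey-Fuller regression [5] by OLS; the built-in dfuller command automates the same test:

* Simulate a random walk and estimate equation [5]
clear
set obs 200
set seed 12345
gen t = _n
tsset t
gen v = rnormal()
gen y = v in 1
replace y = y[_n-1] + v if t > 1   // y_t = y_{t-1} + v_t
regress D.y L.y, noconstant        // estimated delta should be near zero
dfuller y, noconstant              // tau statistic with DF critical values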

[Figure: Excel line plot of the differenced series, an example of stationary series. Source: CrunchEconometrix]
Visual observation of the differenced series shows that the three variables are stationary around the mean. They all exhibit constant mean reversion; that is, they fluctuate around 0. If we were to draw a trend line, it would be horizontal at about 0.01.

Okay, having said all that, let us return to estimating equation [5]. This is quite simple: all that is required is to take the first difference of Yt, regress it on Yt-1, and see whether the estimated slope coefficient in this regression is statistically different from zero or not. If it is zero, we conclude that Yt is nonstationary. But if it is negative, we conclude that Yt is stationary.

Note: Since δ = (ρ − 1), for stationarity ρ must be less than one. For this to happen δ must be negative!

The only question is which test we use to find out whether the estimated coefficient of Yt-1 in [5] is zero or not. You might be tempted to ask: why not use the usual t test? Unfortunately, under the null hypothesis that δ = 0 (i.e., ρ = 1), the t value of the estimated coefficient of Yt-1 does not follow the t distribution even in large samples; that is, it does not have an asymptotic normal distribution.

What is the alternative? Dickey and Fuller (DF) have shown that under the null hypothesis that δ = 0, the estimated t value of the coefficient of Yt−1 in [5] follows the τ (tau) statistic. These authors have computed the critical values of the tau statistic on the basis of Monte Carlo simulations.

Note: Interestingly, if the hypothesis that δ = 0 is rejected (i.e., the time series is stationary), we can use the usual (Student’s) t test.

The unit root test can be computed under three (3) different null hypotheses. That is, under different model specifications such as if the series is a:
1.    random walk (that is, model has no constant, no trend)
2.    random walk with drift (that is, model has a constant)
3.    random walk with drift and a trend (that is, model has a constant and trend)

In all cases, the null hypothesis is that δ = 0; that is, there is a unit root and the alternative hypothesis is that δ is less than zero; that is, the time series is stationary. If the null hypothesis is rejected, it means that Yt is a stationary time series with zero mean in the case of [5], that Yt is stationary with a nonzero mean in the case of a random walk with drift model, and that Yt is stationary around a deterministic trend in the case of random walk with drift around a trend.
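In Stata's dfuller command the three specifications correspond to the following (a sketch with a hypothetical series y):

dfuller y, noconstant   // 1) random walk: no constant, no trend
dfuller y               // 2) random walk with drift: constant only
dfuller y, trend        // 3) drift plus a deterministic trend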

It is extremely important to note that the critical values of the tau test for the hypothesis that δ = 0 are different for each of the preceding three specifications of the DF test, and these are now computed by all econometric packages. In each case, if the computed absolute value of the tau statistic (|τ|) exceeds the DF or MacKinnon critical tau value, the null hypothesis of a unit root is rejected; in other words, the time series is stationary. On the other hand, if the computed |τ| does not exceed the critical tau value, we fail to reject the null hypothesis, in which case the time series is nonstationary.

Note: Students often get confused in interpreting the outcome of a unit root test. For instance, if the calculated tau statistic is -2.0872 and the DF critical tau value is -3.672, you cannot reject the null hypothesis; hence the conclusion is that the series is nonstationary. But if the calculated tau statistic is -5.278 and the DF critical tau value is -3.482, you reject the null hypothesis in favour of the alternative; hence the conclusion is that the series is stationary.
*Always use the appropriate critical τ values for the indicated model specification.

How to Perform Unit Root Test in Excel (see for Stata and EViews)
Example dataset is from Gujarati and Porter Table 21.1
Several tests have been developed in the literature to test for a unit root. Prominent among these are the Augmented Dickey-Fuller (ADF), Phillips-Perron (PP) and Dickey-Fuller Generalised Least Squares (DF-GLS) tests, among others. This tutorial limits testing to the ADF and PP tests; once the reader has a good basic knowledge of these two techniques, they can progress to conducting other stationarity tests on their time series variables.
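For reference, the two tests used in this tutorial are available as built-in Stata commands; the lag length shown is illustrative:

dfuller lnpce, lags(1)   // Augmented Dickey-Fuller test
pperron lnpce            // Phillips-Perron test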

How to Perform the Augmented Dickey-Fuller (ADF) Test
An important assumption of the DF test is that the error terms are independently and identically distributed. The ADF test adjusts the DF test to take care of possible serial correlation in the error terms by adding lagged difference terms of the outcome (dependent) variable. For the Yt series, in conducting the DF test it is assumed that the error term vt is uncorrelated. For cases where it is correlated, Dickey and Fuller developed the augmented Dickey-Fuller (ADF) test, which is conducted by "augmenting" the preceding three model specifications with lagged differences of the dependent variable.

…so, let’s get started!

First step: get the Data Analysis Add-in menu
Before you begin, ensure that the DATA ANALYSIS Add-in is in your tool bar because without it, you cannot perform any regression analysis. To obtain it follow this guide:
File >> Options >> Add-ins >> the Excel Options dialog box opens
In the Manage box at the bottom, choose Excel Add-ins and click Go
In the Add-ins dialog box, check Analysis ToolPak
Click OK

If it is correctly done, you should see this:
[Figure: Excel Add-ins dialog box. Source: CrunchEconometrix]
…and the Data Analysis menu appears at the extreme top-right corner under the Data menu.
[Figure: Excel Add-in icon on the Data ribbon. Source: CrunchEconometrix]

Second step: have your data ready
We use Gujarati and Porter's Table 21.1 quarterly data from 1970q1 to 1991q4, considering only the pce series in natural logarithms (because the variable is originally measured in US$ billions).

Remember that the ADF version of equation [5], augmented here with one lagged difference term, is:

ΔYt = δYt-1 + α₁ΔYt-1 + vt

Hence, there is a need to create three additional variables: the difference of lnpce, the lag of lnpce and the lagged difference of lnpce.

Note: The augmented Dickey–Fuller (ADF) test is conducted by "augmenting" the model specifications with lagged differences of the dependent variable.
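For readers following along in Stata, a sketch of the same construction (assuming the data have already been tsset by a quarterly time variable) is:

* Hand-rolled ADF regression for lnpce with one lagged difference
gen dlnpce   = D.lnpce    // first difference
gen lnpce_1  = L.lnpce    // one-period lag
gen dlnpce_1 = L.dlnpce   // lagged first difference
regress dlnpce lnpce_1 dlnpce_1
dfuller lnpce, lags(1)    // the built-in equivalent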

Here is the data in excel format:
[Figure: Excel lnpce workfile. Source: CrunchEconometrix]
Third step: Run the regression in “level”
Go to Data >> Data Analysis >> Regression >> OK (the Regression dialog box opens)
[Figure: Excel Regression dialog box. Source: CrunchEconometrix]

·      Put the data range for dlnpce under Input Y Range
·      Put the data range for lnpce_1 and dlnpce_1 under Input X Range
·      Check the Labels box
·      Check the Confidence Level box
·      Check Output Range
·      Click OK
(You have simply told Excel to regress the dependent variable, dlnpce, on the explanatory variables, lnpce_1 and dlnpce_1.) The output is shown as:
[Figure: Excel Augmented Dickey-Fuller result for nonstationarity. Source: CrunchEconometrix]
[Figure: Augmented Dickey-Fuller critical values. Source: CrunchEconometrix]
Decision: The null hypothesis of a unit root cannot be rejected against the one-sided alternative if the computed absolute value of the tau statistic is lower than the absolute value of the DF or MacKinnon critical tau value; we then conclude that the series is nonstationary. Otherwise (that is, if it is higher), the series is stationary.

Decision: Alternatively, using the probability value, we reject the null hypothesis of a unit root if the computed probability value is less than the chosen level of statistical significance.

Fourth step: Run the regression in “first difference”
Having confirmed that lnpce is nonstationary, we need to run the test again using its first difference. The next thing to do, then, is to generate the first difference of dlnpce (that is, D.dlnpce) and estimate the equation. The data for the first-difference equation are shown here:
[Figure: Excel dlnpce workfile. Source: CrunchEconometrix]
And the output of the regression is shown as:
[Figure: Excel Augmented Dickey-Fuller result for stationarity. Source: CrunchEconometrix]

After unit root testing, what next?
The outcome of unit root testing matters for the empirical model to be estimated. The following scenarios explain the implications of unit root testing for further analysis. 

Scenario 1:  When series under scrutiny are stationary in levels? 
Suppose pce and pdi are stationary in levels, that is, they are I(0) series (integrated of order zero). In this situation, performing a cointegration test is not necessary, because any shock to the system in the short run quickly adjusts to the long run. Consequently, only the long run model should be estimated. That is, the model should be specified as:

 pcet = 𝛂₀ + b·pdit + ut
In essence, the estimation of a short run model is not necessary if the series are I(0).

Scenario 2: When series are stationary in first differences?
·      Under this scenario, the series are nonstationary in levels but stationary in first differences.
·      One special feature of these series is that they are of the same order of integration, I(1).
·      Under this scenario, the model in question is not entirely useless even though the variables are individually unpredictable. To verify the relevance of the model, there is a need to test for cointegration. That is, can we assume a long run relationship in the model despite the fact that the series are drifting apart or trending upward or downward?
·      If there is cointegration, the series in question are related and can be combined in a linear fashion: even if there are shocks in the short run that affect movement in the individual series, they will converge with time (in the long run).
·      However, there is no long run relationship if the series are not cointegrated; if there are shocks to the system, the model is not likely to converge in the long run.
·      Note that both long run and short run models must be estimated when there is cointegration.
·      The estimation will require the use of vector autoregressive (VAR) and vector error correction (VECM) models.
·      If there is no cointegration, there is no long run and only the short run model should be estimated. That is, run only a VAR, not a VECM!
·      There are two prominent cointegration tests for I(1) series in the literature: the Engle-Granger cointegration test and the Johansen cointegration test.
·      The Engle-Granger test is meant for a single equation model while the Johansen test is considered when dealing with multiple equations. (A Stata sketch follows this list.)
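A Stata sketch for Scenario 2 (hypothetical I(1) series y and x, data already tsset) might look like this; the built-in vecrank command implements the Johansen test, and the lag orders are illustrative:

vecrank y x, lags(2) trend(constant)   // Johansen cointegration test
vec y x, lags(2)                       // VECM if cointegration is found
var D.y D.x, lags(1/2)                 // VAR in differences otherwise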

Scenario 3: The series are integrated of different orders?
·      Should lnpce and lnpdi be integrated of different orders then, as in the second scenario, a cointegration test is also required, but the Engle-Granger and Johansen cointegration tests are no longer valid.
·      The appropriate cointegration test to apply is the Bounds test for cointegration, and the estimation technique is the autoregressive distributed lag (ARDL) model.
·      Similar to Scenario 2, if the series are not cointegrated on the basis of the Bounds test, we estimate only the short run. That is, run only the ARDL model.
·      However, both the long run and short run models are valid if there is cointegration. That is, run both the ARDL and ECM models. (A sketch follows this list.)
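A sketch for Scenario 3 uses the community-contributed ardl package (install once with: ssc install ardl); the variable names and lag orders are illustrative, and the bounds-test subcommand name may differ across package versions:

ardl lnpce lnpdi, lags(4 4) ec   // ARDL estimated in error-correction form
estat ectest                     // Pesaran/Shin/Smith bounds test
                                 // (estat btest in older package versions)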

In addition, there are formal tests that can be carried out to see whether, despite the behaviour of the series, there can still be a linear combination, long run relationship or equilibrium among the series. The existence of such a linear combination is what is known as cointegration; thus, a regression with I(1) series is either spurious or cointegrated. The basic cointegration tests are the Johansen, Engle-Granger and Bounds tests. These will be discussed in detail in subsequent tutorials.

In conclusion, I have discussed what is meant by a nonstationary series, how a series with a unit root can be detected, and how such series can be made useful for empirical research. You are encouraged to use your own data or the sample datasets uploaded to this blog to practise and gain more hands-on knowledge.

[Watch video on how to perform stationarity test in Excel]

Please post your comments below….