People's demand for money will also depend on what has happened in the last year or two, and perhaps longer in some cases.
“Due to force of habit, people do not change their consumption habits immediately following a price decrease or an income increase, perhaps because the process of change involves some immediate disutility.”2
This theory can be applied to the project, so there is now a case for at least one lagged variable; whether it should be a lag of the dependent variable or of one of the explanatory variables will be addressed later in this analysis, after starting with a basic model.
The Econometric Model
Using economic theory it is decided that the dependent variable will be ma-pa, which is a measure of M1 money (addressed earlier) minus the “implicit price deflator for total final expenditure”. This measures “real” money balances; if M1 alone were used as the dependent variable, only the “nominal” level of money would be measured.
The interest rate, which has already been identified as a likely important factor in this analysis, will be measured using the RNET variable. The RLA variable was not used because it only measures the interest rate on deposits held for at least three months. It is interesting to note that the RLA and RNET values are identical up until the third quarter of 1984.
The hardest decision is whether to use the ca (consumers’ expenditure) or ia (real personal disposable income) variable in the model. After careful deliberation it is decided that the ia variable will be used, as it addresses what “proportion” of income is dedicated to money, which seems a more interesting analysis than simply the level of expenditure. If there is time, however, the ca variable will be looked at later.
So finally analysis can start, with the basic model given below:
ma-pa_t = β1 + β2 RNET_t + β3 ia_t + u_t
As mentioned above, the use of lagged variables will be important in the regression analysis. The most obvious variable to lag is ma-pa, the idea being that the amount of money demanded this quarter is affected by the amount demanded in previous quarters. The ia variable may also be lagged, to see whether past disposable income affects this quarter’s demand for money. (Note: when regressing these lagged variables one must be aware of the possibility of autocorrelation and multicollinearity.) This note leads neatly on to the next important topic.
Data Issues and the Hypotheses to be Tested
The data used are seasonally adjusted, so there is no need to add dummy variables to represent different parts of the year.
Dummy variables could be used to distinguish between other external factors, such as the type of government in power at the time, but this would be an unnecessary complication. Because this is a “log-linear” model, the coefficients will be measures of elasticity. The regression will cover the period from 1963_1 to 1989_2. (Note: 1989_3 and 1989_4 will not be included due to incomplete data.)
Hypotheses
The variables will each be tested for individual significance, and the overall strength of the model will also be measured. After the initial regression the model will be tested for problems such as autocorrelation, multicollinearity and misspecification bias. The long-run properties of the model will also be examined.
As mentioned before, if there is sufficient time the ca variable will be looked at.
Estimation
The regression results from the original basic model are reproduced in Appendix (a). The signs of the two explanatory variables are as expected: the level of disposable income has a positive relationship with the demand for money, while the interest rate has a negative one. The t-values on the coefficients of RNET and ia reject the null hypothesis that they are individually insignificant at both the 5% and 1% levels of significance. The model also has a fairly strong R^2 of 0.68154, which is promising.
Unfortunately, the Durbin-Watson d statistic obtained is suggestive of positive autocorrelation. The Durbin-Watson test does, however, have its limitations: there are zones of “indecision” and, more importantly, it is not appropriate when a lagged dependent variable is included, which will occur later as the basic model evolves. The Breusch-Godfrey test will therefore be used to test for autocorrelation. It indicates that autocorrelation is present in the model at both the 5% and 1% levels, particularly first-order serial correlation. White’s general test also reveals heteroscedasticity. These results mean that the OLS estimators are no longer BLUE.
In an attempt to counteract these problems the model will have to be changed. The variables in the basic model will remain, but lagged values of the variables will be included, so that the new model becomes:
ma-pa_t = β1 + β2 RNET_t + β3 ia_t + β4 ma-pa_{t-1} + β5 RNET_{t-1} + β6 ia_{t-1} +
β7 ma-pa_{t-2} + β8 RNET_{t-2} + β9 ia_{t-2} + β10 ma-pa_{t-3} + β11 RNET_{t-3} + β12 ia_{t-3} + u_t
The results of this regression can be found in the appendix under part (c). The new chi-squared figures indicate that we cannot reject the null hypothesis of no heteroscedasticity at the 5% and 1% levels. We also cannot reject the null hypothesis of no autocorrelation at the same levels of significance. The R^2 value has shot up to 0.994653 and, by the F-test, is significant. Although the worst of the autocorrelation and heteroscedasticity seems to have been dealt with, the t-ratios now seem to be insignificant. A high R^2 value with few significant t-ratios is a classic indicator of multicollinearity, which is undesirable as it leads to large standard errors of the estimators.
Remedial measures to combat multicollinearity
The simplest way to attempt to control multicollinearity is to drop variables, which can be done confidently here given how many there are. The new model (now greatly cut down) is:
ma-pa_t = β1 + β2 RNET_t + β3 ia_t + β4 ma-pa_{t-1} + u_t
The regression results for this model can be found in Appendix (d), and they look promising. There is a high R^2 value, and the t-values (apart from the intercept) all seem to be significant, suggesting the problem of multicollinearity has gone. The null hypothesis of no autocorrelation cannot be rejected at the 1% level, and it is tantalisingly close to being accepted at the 5% level. The null hypothesis of no heteroscedasticity also cannot be rejected at either the 1% or 5% level of significance. At last, a valid final model seems to have been found.
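Because the final model contains a lagged dependent variable, long-run effects can be backed out in the standard partial-adjustment way: in a steady state ma-pa_t = ma-pa_{t-1}, so each long-run coefficient is the short-run coefficient divided by (1 − λ), where λ is the coefficient on the lagged dependent variable. A small sketch using the Appendix (d) estimates (this reading is an added interpretation, not something reported in the output itself):

```python
# Long-run multipliers implied by the partial-adjustment form of the final
# model in Appendix (d): ma-pa_t = b1 + b2*RNET_t + b3*ia_t + lam*ma-pa_{t-1}.
# In the long run ma-pa_t = ma-pa_{t-1}, so each long-run effect is b/(1-lam).
lam = 0.909460       # coefficient on ma-pa_1
b_rnet = -0.743990   # short-run interest-rate coefficient
b_ia = 0.0864331     # short-run income coefficient

lr_rnet = b_rnet / (1 - lam)
lr_ia = b_ia / (1 - lam)
print(f"long-run RNET effect: {lr_rnet:.2f}")   # about -8.22
print(f"long-run ia effect:   {lr_ia:.2f}")     # about 0.95
```

The small short-run coefficients combined with the large λ imply much bigger long-run responses, which is consistent with the slow adjustment of habits suggested by the theory quoted at the start.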
Appendix:
APPENDIX (a)
EQ( 1) Modelling ma-pa by OLS (using dataset3.in7)
The estimation sample is: 1963 (1) to 1989 (2)
Coefficient Std.Error t-value t-prob Part.R^2
Constant 6.67414 0.6125 10.9 0.000 0.5355
RNET -3.83729 0.2829 -13.6 0.000 0.6412
ia 0.421047 0.05679 7.41 0.000 0.3480
sigma 0.108626 RSS 1.21535824
R^2 0.68154 F(2,103) = 110.2 [0.000]**
log-likelihood 86.4177 DW 0.223
no. of observations 106 no. of parameters 3
mean(ma-pa) 10.8979 var(ma-pa) 0.0360034
APPENDIX (b)
Error autocorrelation coefficients in auxiliary regression:
Lag Coefficient Std.Error
1 1.0444 0.1013
2 -0.048955 0.1458
3 -0.16074 0.1448
4 0.1128 0.1466
5 -0.076463 0.1065
RSS = 0.247076 sigma = 0.00252118
Testing for error autocorrelation from lags 1 to 5
Chi^2(5) = 84.451 [0.0000]** and F-form F(5,98) = 76.812 [0.0000]**
Testing for heteroscedasticity using squares
Chi^2(4) = 28.035 [0.0000]** and F-form F(4,98) = 8.8098 [0.0000]**
APPENDIX (c)
EQ( 3) Modelling ma-pa by OLS (using dataset3.in7)
The estimation sample is: 1963 (4) to 1989 (2)
Coefficient Std.Error t-value t-prob Part.R^2
ma-pa_1 0.660335 0.1082 6.10 0.000 0.2905
ma-pa_2 0.144261 0.1289 1.12 0.266 0.0136
ma-pa_3 0.0782691 0.09929 0.788 0.433 0.0068
Constant 0.139320 0.1524 0.914 0.363 0.0091
RNET -0.766103 0.1182 -6.48 0.000 0.3157
ia 0.101770 0.08265 1.23 0.221 0.0164
RNET_1 -0.101028 0.1963 -0.515 0.608 0.0029
ia_1 0.107584 0.09780 1.10 0.274 0.0131
RNET_2 -0.126755 0.1975 -0.642 0.523 0.0045
ia_2 -0.0637855 0.09550 -0.668 0.506 0.0049
RNET_3 0.0297317 0.1319 0.225 0.822 0.0006
ia_3 -0.0322372 0.07834 -0.411 0.682 0.0019
sigma 0.0149724 RSS 0.0203998214
R^2 0.994653 F(11,91) = 1539 [0.000]**
log-likelihood 292.988 DW 2.01
no. of observations 103 no. of parameters 12
mean(ma-pa) 10.8974 var(ma-pa) 0.0370415
Error autocorrelation coefficients in auxiliary regression:
Lag Coefficient Std.Error
1 0.34614 0.9412
2 0.48134 0.6174
3 -0.23942 0.2097
4 0.16274 0.1137
5 0.21758 0.1099
RSS = 0.0188337 sigma = 0.000218997
Testing for error autocorrelation from lags 1 to 5
Chi^2(5) = 7.9075 [0.1614] and F-form F(5,86) = 1.4303 [0.2216]
Testing for heteroscedasticity using squares
Chi^2(22)= 20.089 [0.5774] and F-form F(22,68) = 0.74893 [0.7728]
APPENDIX (d)
EQ( 3) Modelling ma-pa by OLS (using dataset3.in7)
The estimation sample is: 1963 (4) to 1989 (2)
Coefficient Std.Error t-value t-prob Part.R^2
ma-pa_1 0.909460 0.01242 73.2 0.000 0.9819
Constant 0.120873 0.1260 0.960 0.340 0.0092
RNET -0.743990 0.05791 -12.8 0.000 0.6251
ia 0.0864331 0.009326 9.27 0.000 0.4645
sigma 0.0149187 RSS 0.0220341891
R^2 0.994225 F(3,99) = 5681 [0.000]**
log-likelihood 289.019 DW 2.46
no. of observations 103 no. of parameters 4
mean(ma-pa) 10.8974 var(ma-pa) 0.0370415
Error autocorrelation coefficients in auxiliary regression:
Lag Coefficient Std.Error
1 -0.25767 0.1024
2 -0.053333 0.1055
3 -0.079404 0.1061
4 0.11792 0.1063
5 0.20319 0.1029
RSS = 0.0196544 sigma = 0.000209089
Testing for error autocorrelation from lags 1 to 5
Chi^2(5) = 11.125 [0.0490]* and F-form F(5,94) = 2.2764 [0.0531]
Testing for heteroscedasticity using squares
Chi^2(6) = 7.3739 [0.2876] and F-form F(6,92) = 1.1824 [0.3226]