Properties of the fitted OLS line
Its properties are the following: first, the line passes through the sample means of Y and X; second, the mean value of the fitted values, Ŷ_i, equals the mean value of the actual Y_i; third, the mean value of the residuals, e_i, is zero; fourth, the residuals e_i are uncorrelated with the fitted values, i.e. Σ Ŷ_i e_i = 0; fifth, the residuals e_i are uncorrelated with the X_i, i.e. Σ e_i X_i = 0.
Figure 14.4 Fitted Ordinary Least Squares (OLS) line
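These five properties can be checked numerically. The following is a minimal sketch using NumPy on made-up illustrative data (the series below are not from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(10.0, 2.0, size=50)
Y = 3.0 + 0.5 * X + rng.normal(0.0, 1.0, size=50)

# OLS slope and intercept for Y = b1 + b2*X
b2 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
b1 = Y.mean() - b2 * X.mean()          # property 1: line passes through (X-bar, Y-bar)

Y_hat = b1 + b2 * X
e = Y - Y_hat

print(np.isclose(Y_hat.mean(), Y.mean()))   # property 2: mean of fitted = mean of actual
print(np.isclose(e.mean(), 0.0))            # property 3: mean residual is zero
print(np.isclose(np.sum(Y_hat * e), 0.0))   # property 4: residuals uncorrelated with fitted
print(np.isclose(np.sum(X * e), 0.0))       # property 5: residuals uncorrelated with X
```

All four checks print True, whatever data are fed in, because the properties follow from the OLS normal equations rather than from the particular sample.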
The problem of statistical inference
Suppose the sample regression line Ŷ_i = b_1 + b_2 X_i is fitted as an estimate of the population line Y_i = β_1 + β_2 X_i + e_i. It may be of interest to test whether β_2, say, takes a particular value, such as β_2 = 1. Hypothesis testing procedures are needed for this. In hypothesis testing, the following steps are carried out:
(1) Specify the null (H_0): e.g. β_2 = 0,
and the alternative (H_1): β_2 ≠ 0, or β_2 > 0, or β_2 < 0.
(2) Specify the level of significance α (and the level of confidence, 1 − α), usually 5% or 1%.
(3) Choose a test statistic, e.g. z = (b_2 − β_2)/se(b_2) ~ N(0, 1).
(4) Perform calculations to find z^obs.
(5) Decision (compare z^obs with the critical value z^crit). With respect to the latter, for a z test statistic z = (b_2 − β_2)/se(b_2) (that is, when the population variance σ² is known), testing H_0: β = β_0, do not reject the null:
for a two-tail test (H_1: β ≠ β_0), when −z^crit_{α/2} < z^obs < +z^crit_{α/2};
for a right-tail test (H_1: β > β_0), when z^obs < +z^crit_α;
for a left-tail test (H_1: β < β_0), when z^obs > −z^crit_α.
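The five steps can be sketched in code. This is an illustration only: the values of b_2 and se(b_2) below are made up, and σ² is assumed known so that the z (rather than t) statistic applies:

```python
from statistics import NormalDist

# Steps 1-2: hypotheses and significance level
b2, se_b2 = 0.45, 0.20      # illustrative estimate and (known-variance) standard error
alpha = 0.05
beta2_null = 0.0            # H0: beta2 = 0, against H1: beta2 != 0

# Steps 3-4: compute the observed test statistic
z_obs = (b2 - beta2_null) / se_b2

# Two-tail critical value at alpha = 5%: approximately 1.96
z_crit = NormalDist().inv_cdf(1 - alpha / 2)

# Step 5: do not reject H0 when -z_crit < z_obs < +z_crit
reject = not (-z_crit < z_obs < z_crit)
print(round(z_obs, 3), round(z_crit, 3), reject)  # -> 2.25 1.96 True
```

Here z^obs = 2.25 exceeds the critical value 1.96, so the null β_2 = 0 is rejected at the 5% level.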
In testing hypotheses about β_1, we know from the distribution of b_1, b_1 ~ N(β_1, σ² Σ X_i² / (n Σ x_i²)), and the z-transformation yields the test statistic z = (b_1 − β_1)/se(b_1) ~ N(0, 1), where se(b_1) = √(σ² Σ X_i² / (n Σ x_i²)).
Similarly, since the distribution of b_2 is b_2 ~ N(β_2, σ² / Σ x_i²), the appropriate test statistic is z = (b_2 − β_2)/se(b_2) ~ N(0, 1), where se(b_2) = √(σ² / Σ x_i²), with x_i = X_i − X̄ denoting mean deviations.
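The two standard-error formulas can be computed directly. A small sketch with made-up data and an assumed known error variance σ²:

```python
import numpy as np

sigma2 = 1.0                           # assumed known error variance
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
n = len(X)
x = X - X.mean()                       # mean deviations x_i = X_i - X-bar

# Variances of the OLS estimators, per the formulas in the text
var_b2 = sigma2 / np.sum(x ** 2)
var_b1 = sigma2 * np.sum(X ** 2) / (n * np.sum(x ** 2))

se_b2 = np.sqrt(var_b2)
se_b1 = np.sqrt(var_b1)
print(round(float(se_b2), 4), round(float(se_b1), 4))  # -> 0.3162 1.0488
```

With Σ x_i² = 10 and Σ X_i² = 55, these give se(b_2) = √0.1 ≈ 0.3162 and se(b_1) = √1.1 ≈ 1.0488.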
The assumptions underlying the linear regression model are summarised in the following equation:
Y_i = β_1 + β_2 X_i + ε_i,  ε_i ~ iid N(0, σ²)
That is:
• X_i are exogenous or predetermined, or equivalently the X_i are orthogonal to (independent of) the ε_i. That is, Cov(X_i, ε_i) = E(X_i ε_i) = 0.
• Homoskedasticity. That is, the conditional variance of Y, or equivalently of the error term, is constant (and equal to σ²). Mathematically, E(ε_i²) = σ².
• No autocorrelation, or independence of the Y_i, or equivalently of the ε_i, which is a result of random sampling. Mathematically, E(ε_i ε_j) = 0 for i ≠ j.
• Stability of the parameters of interest β_1 and β_2 (also of σ², i.e. homoskedasticity) over the period of estimation.
• The number of observations must be at least equal to the number of estimated coefficients for estimation.
• Non-zero variation in the independent variable is crucial to enable estimation of the coefficients and their standard errors.
• Normality of Y, or equivalently of ε.
It is important that the previous assumptions hold if correct inferences are to be made from an estimated regression model (that is, if we are to place any confidence in our results). Tests exist which enable one to check the validity of these assumptions.
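As a rough illustration of such checks, the sketch below examines two assumptions on the fitted residuals of a simulated regression: zero mean and no first-order autocorrelation. This is an informal diagnostic, not one of the formal tests referred to above:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=100)
Y = 1.0 + 2.0 * X + rng.normal(size=100)   # synthetic data satisfying the assumptions

# Fit OLS and form residuals
b2 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
b1 = Y.mean() - b2 * X.mean()
e = Y - (b1 + b2 * X)

mean_resid = e.mean()                      # zero by construction of OLS
rho1 = np.corrcoef(e[:-1], e[1:])[0, 1]    # sample first-order autocorrelation
print(abs(mean_resid) < 1e-10, abs(rho1) < 0.5)
```

When an assumption fails (e.g. errors generated with serial correlation), the corresponding statistic moves away from zero, which is the intuition behind formal diagnostic tests.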
Goodness of fit: R² – The coefficient of determination
Consider the regression model in mean-deviation form: y_i = b_2 x_i + e_i. The sum of squared residuals Σ e_i² could provide a measure of fit (the smaller Σ e_i² = Σ(Y_i − Ŷ_i)² is, the better the fit; the closer Ŷ_i is to Y_i). However, this measure is affected by the scaling of the variables.
Figure 14.5 Fitted OLS line and errors
Note: The variation for observation i is Y_i − Ȳ. However, when calculating total variation over all observations, Σ(Y_i − Ȳ) = 0. Thus, use Σ(Y_i − Ȳ)² as the measure of variation. From this, for each observation: y_i² = ŷ_i² + e_i² + 2 ŷ_i e_i.
Variation over all observations is: Σ y_i² = Σ ŷ_i² + Σ e_i² + 2 Σ ŷ_i e_i.
Since Σ ŷ_i e_i = b_2 Σ x_i e_i = 0, Σ y_i² = Σ ŷ_i² + Σ e_i² = b_2² Σ x_i² + Σ e_i²
Total Sum of Squares = Explained Sum of Squares + Residual Sum of Squares, i.e. TSS = ESS + RSS
Alternatively, goodness of fit may be thought of as the variation in y_i around the mean value that is explained by the regression model (that is, by the variation in x_i). Ideally, we would like all the variation in y_i (= Y_i − Ȳ) to be explained by the fitted values ŷ_i; that is, for the actual y_i to lie on the line. Thus, for each observation: y_i = ŷ_i + e_i.
Define the coefficient of determination R^{2} as the proportion of the variation in Y explained by the regression line.
That is, R² is: R² = ESS/TSS = Σ ŷ_i² / Σ y_i² = b_2² Σ x_i² / Σ y_i² = 1 − Σ e_i² / Σ y_i²
R² may also be calculated as: R² = (Σ x_i y_i)² / (Σ x_i² Σ y_i²)
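The decomposition TSS = ESS + RSS and the equivalence of the R² expressions can be verified numerically. A sketch on illustrative data (not the chapter's shipping series):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=60)
Y = 0.5 + 1.5 * X + rng.normal(size=60)

x, y = X - X.mean(), Y - Y.mean()        # mean-deviation form
b2 = np.sum(x * y) / np.sum(x ** 2)
e = y - b2 * x                           # residuals

TSS = np.sum(y ** 2)
ESS = b2 ** 2 * np.sum(x ** 2)
RSS = np.sum(e ** 2)

r2_a = ESS / TSS                                            # ESS/TSS
r2_b = 1 - RSS / TSS                                        # 1 - RSS/TSS
r2_c = np.sum(x * y) ** 2 / (np.sum(x ** 2) * np.sum(y ** 2))  # squared correlation form

print(np.isclose(TSS, ESS + RSS), np.isclose(r2_a, r2_b), np.isclose(r2_a, r2_c))
```

All three checks print True: the sum-of-squares decomposition holds exactly for OLS, and the three R² formulas are algebraically identical.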
Example: Performing a univariate regression analysis in Microsoft Excel is relatively simple: arrange the data for the dependent (Y) and independent (X) variables in two columns; then select the "Data" tab, Data Analysis, Regression, and specify Y and X. In order to illustrate the regression model with shipping market data, we regress monthly growth rates of Capesize five-year second-hand vessel prices (Y) on monthly growth rates of Capesize voyage earnings (X), over the period July 2009 to May 2020, yielding 130 monthly return observations in total for each time series. Choosing the dependent and independent variables is typically based on economic reasoning. Specifically, as discussed in Chapters 1 and 2 of this book, vessel prices are expected to be linked with the income generated from the operation of the vessel (earnings). In this case the relationship between the monthly percentage changes of these variables, rather than their levels, is examined, as per the theory.
Table 14.20 presents the output of Microsoft Excel for this regression model, while Figure 14.6 shows the OLS fitted line. As observed in the table, the R-square (R²) statistic is equal to 0.024947 (or 2.50%), which means that the fluctuations of the independent
Table 14.20 Excel regression output

Regression statistics
  Multiple R           0.157947
  R Square             0.024947
  Adjusted R Square    0.01733
  Standard Error       0.050093
  Observations         130

             Coefficients   Standard error   t Stat     P-value    Lower 95%   Upper 95%
  Intercept  -0.00569       0.004397         -1.29332   0.198231   -0.01439    0.003014
  Cape_Spot  0.013369       0.007387         1.809685   0.072691   -0.00125    0.027986
Figure 14.6 OLS line fit for Capesize five-year vessel prices (Cape_5yr_Price) on Capesize voyage earnings (Cape_Spot)
variable (the monthly returns of Capesize voyage earnings) were able to explain only 2.50% of the fluctuations observed in the dependent variable (the monthly returns of Capesize five-year second-hand prices). This is also evident in the figure, which shows a low fit of the regression line, in the sense that the data are widely scattered around the fitted line.
The estimated coefficient of the constant term (the intercept with the vertical axis) equals -0.00569, while that of the slope equals 0.013369. The t-statistic of the slope is equal to 1.809685 and the associated p-value is equal to 0.072691. Therefore, at the 10% significance level (α), the null hypothesis that the estimated slope coefficient β_2 equals zero is rejected, since 0.072691 < 0.10. In other words, the coefficient of the variable of interest (the slope) is statistically significant at the 10% significance level. Note that at the 5% significance level, which is typically used in empirical analyses, the β_2 coefficient is not statistically significant, as 0.072691 > 0.05. However, the coefficient of the intercept is not statistically significant at any reasonable level of statistical significance, as its p-value is 0.198231, far larger than the 0.10 and 0.05 significance levels.
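The same kind of univariate regression can be run outside Excel. The sketch below uses NumPy on simulated return series that merely stand in for the Capesize data (the actual series are not reproduced here), and forms the slope t-statistic with the usual unbiased estimate of σ²:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical stand-ins for the two monthly return series (130 observations)
cape_spot = rng.normal(0.0, 0.2, size=130)
cape_5yr_price = 0.005 + 0.05 * cape_spot + rng.normal(0.0, 0.05, size=130)

x = cape_spot - cape_spot.mean()
y = cape_5yr_price - cape_5yr_price.mean()
b2 = np.sum(x * y) / np.sum(x ** 2)
b1 = cape_5yr_price.mean() - b2 * cape_spot.mean()

e = cape_5yr_price - (b1 + b2 * cape_spot)
n = len(e)
s2 = np.sum(e ** 2) / (n - 2)            # unbiased estimate of the error variance
se_b2 = np.sqrt(s2 / np.sum(x ** 2))
t_stat = b2 / se_b2                      # test of H0: beta2 = 0

r2 = 1 - np.sum(e ** 2) / np.sum(y ** 2)
print(t_stat, r2)                        # compare |t| with roughly 1.98 (5%, 128 df)
```

The printed t-statistic and R² play the same roles as the "t Stat" and "R Square" entries in Table 14.20, though the numbers differ because the data here are simulated.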
Extension of results to multivariate regression
In practice, a variable of interest is affected by more than one other variable. For example, the demand for electricity does not depend only on the price of electricity. It also depends on the price of gas (since gas is a substitute for electricity in consumption), the consumers’ income (for normal goods demand increases as income rises), and other factors such as the weather (demand is higher in cold weather), the time of the day (lunch time demand is higher than after midnight), etc.
In order to take account of such situations, the results derived in Sections 14.3.8.1 to 14.3.8.4 for the bivariate regression model need to be extended to a multivariate framework. In what follows, the emphasis is on understanding how to read and interpret the results of multivariate regressions, rather than on deriving the formulas that calculate the coefficients, their standard errors and other related quantities. This will help in understanding and interpreting regression results presented in published work. Multivariate regression results should be thought of in the form of an equation like:
Y = b_0 + b_1 X_1 + b_2 X_2 + … + b_k X_k + e
where Y is the dependent variable. It is the variable that we think can be explained in terms of all the Xs. The latter are the explanatory or independent variables. In the previous equation there are k of them.
The bs are the estimated coefficients of the regression line. b_0 is the "constant term" (it is the expected average value of Y if all the Xs are zero). Each of the other bs is referred to by naming the explanatory variable it multiplies. Thus, b_1 is "the coefficient of X_1", b_2 is "the coefficient of X_2", and so on. Each coefficient is like a slope: it measures the effect of the explanatory variable it multiplies on the dependent variable, other things being equal (ceteris paribus). The coefficients are thus the partial derivatives of Y with respect to each of the Xs. Estimating a regression line involves feeding in the data observations (values for Y and the Xs) and getting back values for all the bs. The formulas that calculate the values of the coefficients and their standard errors are much more complicated than in the simple bivariate (two-variable) regression case. This is because these formulas recognise that the explanatory variables may be correlated with one another.
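A multivariate fit of this kind can be sketched with NumPy's least-squares solver. The electricity-demand variables below are hypothetical placeholders echoing the example above, not real data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
price_elec = rng.normal(size=n)     # X1: price of electricity
income = rng.normal(size=n)         # X2: consumers' income
# Simulated demand with known "true" coefficients b0=10, b1=-1.5, b2=0.8
demand = 10.0 - 1.5 * price_elec + 0.8 * income + rng.normal(size=n)

# Design matrix with a column of ones for the constant term b0
Z = np.column_stack([np.ones(n), price_elec, income])
b, *_ = np.linalg.lstsq(Z, demand, rcond=None)

b0, b1, b2 = b
print(np.round(b, 2))   # each slope is a partial (ceteris paribus) effect
```

The recovered coefficients land close to the values used to simulate the data; each slope measures the effect of its own variable holding the other constant, which is exactly the ceteris paribus interpretation described above.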
[1] Correct model specification of the conditional mean of Y (linearity, as in the previous equation); as a consequence, E(ε_i) = 0.