The Fama equation

I will first estimate the Fama equation, i.e. the linear stochastic counterpart of the UIP model, since it has appeared in myriad empirical papers over the decades as the workhorse linear model of the relationship between interest rates and exchange rates. The exact specification of this model is already presented in Table 6.1, so I will not repeat it here.
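For readers who do not have Table 6.1 at hand, the generic textbook form of the Fama regression (not necessarily the exact specification of Table 6.1) is

\[
\Delta s_{t+k} = \alpha + \beta \, (i_t - i_t^*) + \varepsilon_{t+k},
\]

where \(\Delta s_{t+k}\) is the \(k\)-period change in the (log) exchange rate and \(i_t - i_t^*\) is the domestic-foreign interest rate differential. Under UIP, the joint null hypothesis is \(\alpha = 0\) and \(\beta = 1\).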

I rather want to give some information about the software packages I used in estimating the models in Table 6.1. I used R and Eviews for estimating the Fama equation. I also used R for estimating the Lasso, Ridge, and Elastic Net regressions, and Eviews for the smooth transition models as well as the TAR models. Eviews is a highly capable software package with a very user-friendly graphical user interface, while R is a full-fledged vector-based programming language, which requires at least some basic programming skills. R is widely used among statisticians and data analysts for its superior data analysis capabilities, which continue to expand with the introduction of new packages. R is free, while only a lite (yet highly capable) version of Eviews is available to scholars and students at no cost.

Last but not least, the pound sign (#) marks comments: lines starting with a pound sign are plain text and are not executed by the program. All the lines starting with # in what follows are therefore there purely for communication purposes; I have used them to explain to the reader the meaning of the lines in the code.

##############################################################

# R CODES FOR ESTIMATING THE FAMA EQUATION AND CALCULATING #

# THE MSPE AND MAE OF ITS PREDICTIONS #

##############################################################

# with the following 'read.csv' command, we upload a dataset from
# our computer to R (the file name below is illustrative; replace
# it with the path to your own dataset)

data = read.csv("dataset.csv")

n = nrow(data) # number of rows in the dataset is fed into an object named 'n'

# data partitioning: we can partition data into training and

# testing sets in multiple ways. Below are the codes for random

# sampling of the dataset, with 90% of the data spared for training

# purposes and the remaining 10% spared for testing. Before using

# the codes, do not forget to erase the # sign in front of

# the command lines.

# ind = sample(2, nrow(data), replace = TRUE, prob = c(0.90, 0.10))

# data_train = data[ind==1, ]

# data_test = data[ind==2, ]

# or we can apply the simple Hold-out method for partitioning.

# Below are the codes retaining the first 90% of the observations

# for training and using the rest for testing

data_train = data[1:as.integer(0.90*nrow(data)), ]
data_test = data[(as.integer(0.90*nrow(data))+1):nrow(data), ]

# remember that we are estimating the Fama equation for USD/JPY

# as an example; the columns of our dataset are as follows (in

# order):

# 'X' 'X3mchgUSDAUD' 'X3mchgUSDJPY' 'US_JPYintdiff'

# 'US_AUDintdiff' 'stdevusdaud' 'stdevusdjpy'

# therefore we only need the 3rd column, X3mchgUSDJPY,

# i.e. the 3-month change in the USD/JPY exchange rate, and the 4th

# column, US_JPYintdiff, i.e. the interest rate differential

# between the US and Japan

training_interest_differential = data_train[ ,4]
training_exchange_rate = data_train[ ,3]
testing_interest_differential = data_test[ ,4]
testing_exchange_rate = data_test[ ,3]

# train the model using the training dataset

model.lm = lm(training_exchange_rate~training_interest_differential, data = data_train)

# make the out-of-sample prediction of the model using the testing data

pred = model.lm$coefficients[1] + model.lm$coefficients[2]*testing_interest_differential

y = as.matrix(testing_exchange_rate)
yhat = as.matrix(pred)
errors = yhat - y
errorssquared = errors*errors
errorsabsolute = abs(errors)

# calculation of the Mean Squared Prediction Error and Mean Absolute Error of the model
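As a quick self-contained illustration of the two error measures announced in the comment above (with made-up numbers, not the USD/JPY data), the MSPE and MAE reduce to the means of the squared and absolute prediction errors:

```r
# toy illustration of MSPE and MAE with hypothetical numbers
y    = as.matrix(c(1.0, 2.0, 3.0))  # actual values (made up)
yhat = as.matrix(c(1.5, 1.5, 3.5))  # predicted values (made up)
errors = yhat - y                   # prediction errors: 0.5, -0.5, 0.5
MSPE = mean(errors * errors)        # mean squared prediction error = 0.25
MAE  = mean(abs(errors))            # mean absolute error = 0.5
```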