LOGISTIC REGRESSION

In Chapter 7, I gave the example of logistic regression: predicting whether something (Y) was 1 or 0 based on a series of observations (X). Thus, logistic regression can be thought of as a classification method, and regularized multinomial regression is a respectable modern machine learning approach to multi-class classification. The estimation of the logistic regression parameters, b, is the “training” stage for this classification method. We assume that we are in a multivariate setting, so that we have a vector b with one component for each dimension, and we can proceed with estimation using one of the approaches we saw in the chapters on regression. To predict Yn+1 (or do the classification) based on the features, Xn+1, for a new observation, we simply follow the assumption of logistic regression and calculate

P(Yn+1 = 1 | Xn+1) = e^(bXn+1) / (1 + e^(bXn+1))
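As a concrete illustration of this prediction step, here is a minimal Python sketch that computes this probability. The coefficient vector b and the feature vector x_new are made-up values, not taken from the text; in practice b would come from the training stage.

import numpy as np

def predict_prob(b, x_new):
    """P(Y = 1 | X = x_new) under the logistic regression model with
    fitted coefficient vector b (no intercept, matching the text)."""
    score = np.dot(b, x_new)              # the linear score bX
    return 1.0 / (1.0 + np.exp(-score))   # equals e^(bX) / (1 + e^(bX))

# Illustrative values only: b is assumed to have been estimated already.
b = np.array([1.5, -2.0])
x_new = np.array([0.8, 0.3])
print(predict_prob(b, x_new))             # score = 0.6, probability ~0.65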

Since there are only two possibilities (1 or 0), if this is greater than 1/2, we can assign observation n + 1 to the positive class by the MAP rule. The classification boundary is therefore

e^(bXn+1) / (1 + e^(bXn+1)) = 1/2
which works out to bX = 0 (see Exercises). More generally, we can choose a threshold, t, and set the classification boundary to be bX = t, above which we will assign observation n + 1 to the positive class. This formula is just the equation for a line if there are two dimensions, i.e., Xi = (Xi1, Xi2) (see Exercises), and a plane (or “hyperplane”) if there are more than two dimensions.
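To make the thresholding concrete, here is a small Python sketch, again with made-up values for b and the new observations, that assigns the positive class whenever the score bX exceeds the threshold t. With t = 0 this reproduces the 1/2 probability cutoff, and with two features the boundary b1X1 + b2X2 = t is just a line in the plane.

import numpy as np

def classify(b, x_new, t=0.0):
    """Assign class 1 when the linear score bX exceeds the threshold t;
    t = 0 corresponds to a probability cutoff of 1/2."""
    return 1 if np.dot(b, x_new) > t else 0

# Illustrative values for b and two new observations.
b = np.array([1.5, -2.0])
print(classify(b, np.array([2.0, 0.5])))   # score =  2.0 > 0, so class 1
print(classify(b, np.array([0.2, 0.5])))   # score = -0.7 <= 0, so class 0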

 