Partial correlation tells us how much a third (or fourth . . .) variable contributes to the relation between two variables. Multiple regression puts all the information about a series of variables together into a single equation that takes account of the interrelationships among independent variables. The result of multiple regression is a statistic called multiple-R, which is the combined correlation of a set of independent variables with the dependent variable, taking into account the fact that each of the independent variables might be correlated with each of the other independent variables.

What’s really interesting is R^{2}. Recall from chapter 21 that r^{2}—the square of the Pearson product moment correlation coefficient—is the amount of variance in the dependent variable accounted for by the independent variable in a simple regression. R^{2}, or multiple-R squared, is the amount of variance in the dependent variable accounted for by two or more independent variables simultaneously.
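To make this concrete, here is a minimal sketch in Python of how R^{2} falls out of an ordinary least squares fit with two independent variables. The data, variable names, and coefficients are invented for the illustration; they are not from the text.

```python
import numpy as np

# Hypothetical data: predict y from two independent variables, x1 and x2.
# These numbers are made up purely for illustration.
x1 = np.array([2.0, 4.0, 5.0, 7.0, 8.0, 10.0])
x2 = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 6.0])
y = np.array([5.2, 10.9, 10.3, 17.8, 17.1, 22.7])

# Design matrix: a column of 1s for the intercept, then the predictors.
X = np.column_stack([np.ones_like(x1), x1, x2])

# Ordinary least squares fit.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

# R^2: the share of the variance in y accounted for by x1 and x2 jointly.
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 3))
```

Because y here was constructed to be nearly a linear function of x1 and x2, the fit accounts for almost all of the variance and R^{2} comes out close to 1.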

Now, if the predictors of a dependent variable were all uncorrelated with each other, we could just add together the pieces of the variance in the dependent variable accounted for by each of the independent variables. That is, it would be nice if R^{2} = r_{1}^{2} + r_{2}^{2} + r_{3}^{2} + ...
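A small sketch of that happy case, with made-up numbers: the two predictors below are deliberately constructed to be mean-zero and orthogonal (hence exactly uncorrelated), so the pieces of variance really do add up, and R^{2} equals r_{1}^{2} + r_{2}^{2} exactly.

```python
import numpy as np

# Two perfectly uncorrelated (orthogonal, mean-zero) predictors -- a
# contrived design chosen so the additivity of variance shares is exact.
x1 = np.array([1., -1., 1., -1., 1., -1., 1., -1.])
x2 = np.array([1., 1., -1., -1., 1., 1., -1., -1.])
# y depends on both predictors, plus some arbitrary noise.
y = 2 * x1 + 3 * x2 + np.array([0.5, -0.2, 0.1, 0.4, -0.3, 0.2, -0.1, -0.6])

def multiple_r_squared(X, y):
    """R^2 from an OLS fit of y on the columns of X plus an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / np.sum((y - y.mean()) ** 2)

r1_sq = np.corrcoef(x1, y)[0, 1] ** 2  # simple r^2 for x1 alone
r2_sq = np.corrcoef(x2, y)[0, 1] ** 2  # simple r^2 for x2 alone
big_R_sq = multiple_r_squared(np.column_stack([x1, x2]), y)

# With uncorrelated predictors, the pieces add up exactly.
print(big_R_sq, r1_sq + r2_sq)
```

The orthogonal design is what makes the identity exact; with real data the predictors never cooperate like this, which is the nuisance taken up next.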

It’s a real nuisance, but independent variables are usually correlated with one another. This interdependence among independent variables is called multicollinearity, which we’ll get to later. What we need is a method for figuring out how much variance in a dependent variable is accounted for by a series of independent variables after taking into account all of the overlap in variances accounted for across the independent variables. That’s what multiple regression does.
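A sketch of the nuisance, again with invented numbers: when two predictors are highly correlated with each other, each one alone accounts for much of the same variance in y, so naively adding r_{1}^{2} + r_{2}^{2} double-counts that overlap and overstates what the pair accounts for jointly.

```python
import numpy as np

# Two substantially correlated predictors (multicollinearity).
# All numbers are made up purely for illustration.
x1 = np.array([1., 2., 3., 4., 5., 6., 7., 8.])
x2 = x1 + np.array([0.2, -0.1, 0.3, -0.2, 0.1, -0.3, 0.2, -0.2])  # nearly x1
y = x1 + x2 + np.array([0.4, -0.3, 0.1, 0.2, -0.2, 0.3, -0.4, 0.1])

def multiple_r_squared(X, y):
    """R^2 from an OLS fit of y on the columns of X plus an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / np.sum((y - y.mean()) ** 2)

r1_sq = np.corrcoef(x1, y)[0, 1] ** 2  # simple r^2 for x1 alone
r2_sq = np.corrcoef(x2, y)[0, 1] ** 2  # simple r^2 for x2 alone
naive_sum = r1_sq + r2_sq              # double-counts the shared variance
big_R_sq = multiple_r_squared(np.column_stack([x1, x2]), y)

# The naive sum exceeds 1 (an impossible variance share), while the
# multiple-R squared stays in [0, 1] by construction.
print(naive_sum, big_R_sq)
```

Here each simple r^{2} is close to 1 on its own, so the naive sum lands near 2, while the actual R^{2} cannot exceed 1. Multiple regression resolves this by crediting each predictor only with the variance it accounts for beyond the overlap.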