# Scales of Measurement

The analysis of data depends on the scale of measurement employed in the study design and data collection. At the most basic level, measurement is nominal or categorical. A good example is the numbers that professional athletes such as football players wear on their uniforms: a higher or lower number has no bearing on skill level or performance but merely identifies a particular player. Categorical data therefore need not confer rank, although they sometimes do, as in the illustration that follows. As an example, one might arbitrarily classify persons 5 feet 6 inches or taller as “tall” and those shorter as “short.” These same persons might also be classified by gender as male or female. These are categorical variables, and if one wanted to examine males and females by whether they were classified as tall or short, one would conduct a 2 × 2 chi-square analysis testing the null hypothesis that the distribution of tall versus short persons does not differ between the two genders. *(Note:* In a 2 × 2 chi-square analysis, Yates’s correction for continuity would be applied, whereas this correction is not used for larger contingency tables.) If one wanted to determine whether a particular set of variables (e.g., age, gender, height of father) predicts a dichotomous categorical outcome such as classification as tall or short, one could employ techniques such as discriminant function analysis (DFA) or logistic regression (which, unlike DFA, does not require a multivariate normality assumption).

Of course, the problem with arbitrarily sorting participants into dichotomous categories is the loss of information. If we recorded persons’ actual heights rather than classifying them as tall or short, we would have an interval scale: there is a range of higher and lower scores, with equal intervals between measures scaled in inches or feet. One could argue that height might even be considered a ratio scale, since there is hypothetically an absolute zero point (as on the Kelvin temperature scale). However, since no human being is without height, this would be a specious argument, and the distinction between an interval and a ratio scale would have no impact on the choice of analyses. A simple correlation between height and weight could be computed using the Pearson product-moment correlation coefficient, or height might be predicted from a number of independent variables (IVs) using linear regression. If we wanted to determine how those with high, medium, or low socioeconomic status (SES) differ on standardized test scores, one could conduct a one-way analysis of variance (ANOVA) with standardized test scores serving as the outcome measure. Following a statistically significant *F* test (typically *p <* .05), comparisons between means could be conducted using a post hoc test such as the Student-Newman-Keuls procedure or Tukey’s honestly significant difference (HSD) test. These are compromises between liberal approaches such as multiple independent *t* tests and conservative procedures such as the Bonferroni correction.
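The correlation just described can be sketched directly. Below is a minimal Python implementation of the Pearson product-moment coefficient applied to hypothetical height and weight data (the numbers are illustrative, not real measurements).

```python
# A minimal sketch of the Pearson product-moment correlation
# coefficient. The height/weight data below are hypothetical.
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two interval-level variables."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Sum of cross-products of deviations from the means.
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sx = sqrt(sum((a - mean_x) ** 2 for a in x))
    sy = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical heights (inches) and weights (pounds).
heights = [62, 65, 67, 70, 72, 75]
weights = [120, 140, 150, 165, 180, 200]
print(round(pearson_r(heights, weights), 3))
```

Because height and weight both increase together in these made-up data, the coefficient comes out close to +1, illustrating a strong positive linear association.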

It should be noted that many scales that appear to be interval do not have equal intervals between measurements. For example, in a horse race, the distance between the first- and second-place horses may be quite different from that between the third- and fourth-place horses or the fifth- and sixth-place horses. Unequal intervals between the numbers on a scale constitute an ordinal scale, which must be analyzed with statistics that examine differences between ranks. For correlation, the nonparametric counterpart of the Pearson product-moment test for interval or ratio data is the Spearman rank-order correlation. Instead of an independent *t* test for interval- or ratio-level data, investigators might use the Mann-Whitney *U* test. Instead of a classic ANOVA with an *F* test, ordinal data from three or more groups may be analyzed using the nonparametric (distribution-free) Kruskal-Wallis test.
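The rank-based logic can be sketched as well: the Spearman coefficient is simply the Pearson coefficient computed on ranks. Below is a minimal Python illustration using hypothetical horse-race data; the finishing margins have unequal gaps, but only their order matters.

```python
# A minimal sketch of the Spearman rank-order correlation:
# convert each variable to ranks, then compute Pearson r on the ranks.
# The race data below are hypothetical.
from math import sqrt

def ranks(values):
    """Rank values from 1..n, averaging the ranks of tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        average_rank = (i + j) / 2 + 1  # 1-based average of positions i..j
        for k in range(i, j + 1):
            result[order[k]] = average_rank
        i = j + 1
    return result

def spearman_rho(x, y):
    """Spearman rank-order correlation: Pearson r computed on ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sqrt(sum((a - mx) ** 2 for a in rx))
    sy = sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Hypothetical margins behind the winner (in lengths) and finish times:
# the gaps are unequal, but the orderings agree perfectly.
margins = [0.0, 0.5, 4.0, 4.5, 12.0]
times = [95.0, 95.2, 96.8, 97.0, 100.0]
print(round(spearman_rho(margins, times), 6))
```

Because the two orderings agree exactly, the coefficient is 1.0 regardless of how unequal the gaps between horses are, which is precisely why rank-based statistics suit ordinal data.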