This is the key to building scales. The idea is to find out which, among the many items you’re testing, need to be kept and which should be thrown away. The set of items that you keep should tap a single social or psychological dimension. In other words, the scale should be unidimensional.

In the next few pages, I’m going to walk through the logic of building scales that are unidimensional. Read these pages very carefully. At the end of this section, I’ll advocate using factor analysis to do the item analysis quickly, easily, and reliably. No fair, though, using factor analysis for scale construction until you understand the logic of scale construction itself.

There are three steps to doing an item analysis and finding a subset of items that constitute a unidimensional scale: (1) scoring the items, (2) testing the interitem correlation (2a) and Cronbach's alpha (2b), and (3) taking the item-total correlation.

1. Scoring the Items

The first thing to do is make sure that all the items are properly scored. Assume that we're trying to find items for a scale that measures the strength of support for training in research methods among anthropology students. Here are two potential scale items:

Training in statistics should be required for all undergraduate students of anthropology.

Anthropology undergraduates don't need training in statistics.

You can let the big and small numbers stand for any direction you want, but you must be consistent. Suppose we let the bigger numbers (4 and 5) represent support for training in statistics and let the smaller numbers (1 and 2) represent lack of support for that concept. Those who circle "strongly agree" on the first item get a 5 for that item. Those who circle "strongly agree" on the second, negatively worded item get scored as 1.
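The recoding of reverse-keyed items can be sketched in a few lines of Python. On a 1-to-5 Likert scale, a negatively worded item is recoded as 6 minus the raw response, so high numbers always mean stronger support for the construct. The function name and data here are mine, for illustration only.

```python
def reverse_score(score, scale_min=1, scale_max=5):
    """Recode a response to a reverse-keyed Likert item."""
    return scale_max + scale_min - score

# "Strongly agree" (5) on the negatively worded item becomes a 1;
# "strongly disagree" (1) becomes a 5; the midpoint (3) stays put.
print(reverse_score(5))  # -> 1
print(reverse_score(1))  # -> 5
print(reverse_score(3))  # -> 3
```

The same arithmetic works for any symmetric response scale; only `scale_min` and `scale_max` change.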

2a. Taking the Interitem Correlation

Next, test to see which items contribute to measuring the construct you’re trying to get at, and which don’t. This involves two calculations: the intercorrelation of the items and the correlation of the item scores with the total scores for each informant. Table 11.3 shows the scores for three people on three items, where the items are scored from 1 to 5.

Table 11.3 The Scores for Three People on Three Likert Scale Items

                    Item
    Person     1     2     3
    1          1     3     5
    2          5     2     2
    3          4     1     3

Table 11.4 The Data from the Three Pairs of Items in Table 11.3

                           Pair 1   Diff   Pair 2   Diff   Pair 3   Diff
                           1   3    2      1   5    4      3   5    2
                           5   2    3      5   2    3      2   2    0
                           4   1    3      4   3    1      1   3    2
    Σd (sum of the diffs)           8               8               4
    Σd / MAXd                       0.67            0.67            0.33
    1 − (Σd / MAXd)                 0.33            0.33            0.67

To find the interitem correlation, we would look at all pairs of columns. There are three possible pairs of columns for a three-item matrix. These are shown in table 11.4.

A simple measure of how much these pairs of numbers are alike or unalike involves, first, adding up their actual differences, Σd, and then dividing this by the total possible differences, MAXd.

In the first pair, the actual difference between 1 and 3 is 2; the difference between 5 and 2 is 3; the difference between 4 and 1 is 3. The sum of the differences is Σd = 2 + 3 + 3 = 8.

For each respondent, there could be as much as a 4-point difference (in Pair 1, someone could have answered 1 to item 1 and 5 to item 2, for example). So, across the three respondents, the total possible difference, MAXd, would be 4 × 3 = 12. The actual difference is 8 out of a possible 12 points, so items 1 and 2 are 8/12 = 0.67 different, which means that these two items are 1 − Σd/MAXd = 0.33 alike. Items 1 and 3 are also 0.33 alike, and items 2 and 3 are 0.67 alike.
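The Σd/MAXd calculation is easy to mechanize. Here is a short sketch that runs it on the scores from table 11.3; the helper function and variable names are mine, not part of the text.

```python
from itertools import combinations

def alikeness(a, b, lo=1, hi=5):
    """1 - (sum of actual diffs / max possible diffs) for two item columns."""
    sum_d = sum(abs(x - y) for x, y in zip(a, b))
    max_d = (hi - lo) * len(a)  # up to 4 points per respondent on a 1-5 scale
    return 1 - sum_d / max_d

# Item columns from table 11.3 (three respondents per item):
items = {1: (1, 5, 4), 2: (3, 2, 1), 3: (5, 2, 3)}

for i, j in combinations(items, 2):
    print(f"items {i} and {j} are {alikeness(items[i], items[j]):.2f} alike")
# -> items 1 and 2 are 0.33 alike
#    items 1 and 3 are 0.33 alike
#    items 2 and 3 are 0.67 alike
```

The output reproduces the bottom row of table 11.4.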

Items that measure the same underlying construct should be related to one another. If I answer "strongly agree" to the statement "Training in statistics should be required for all undergraduate students of anthropology," then (if I'm consistent in my attitude and if the items that tap my attitude are properly worded) I should strongly disagree with the statement that "anthropology undergraduates don't need training in statistics." If everyone who answers "strongly agree" to the first statement answers "strongly disagree" to the second, then the items are perfectly correlated.

2b. Cronbach's Alpha

Cronbach’s alpha is a statistical test of how well the items in a scale are correlated with one another. One of the methods for testing the unidimensionality of a scale is called the split-half reliability test. If a scale of, say, 10 items were unidimensional, all the items would be measuring parts of the same underlying concept. In that case, any five items should produce scores that are more or less like the scores of any other five items. This is shown in table 11.5.

Table 11.5 The Schematic for the Split-Half Reliability Test