There are many ways to split a group of items into halves and each split will give you a different set of totals. Here’s the formula for selecting n elements from a set of N elements, paying no attention to the ordering of the elements:

    N!/(n!(N − n)!)

If you have 10 items, then there are 10!/(5!(10 − 5)!) = 252 ways to split them into halves of five each. For 20 items, there are 184,756 possible splits of 10 each. Cronbach’s coefficient alpha provides a way to get the average of all these split-half calculations directly. The formula for Cronbach’s alpha is:

    α = Nρ/(1 + ρ(N − 1))

where N is the number of items and ρ (the Greek letter rho) is the average interitem correlation—that is, the average correlation among all pairs of items being tested.

By convention, a good set of scale items should have a Cronbach’s alpha of 0.80 or higher. Be warned, though, that with a long list of scale items, the chances of getting a high alpha coefficient are good even when the items are only weakly related. An interitem correlation of just 0.29 produces an alpha of 0.80 in a set of 10 items (DeVellis 2003:98).
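DeVellis’s figures can be checked directly from the standardized-alpha formula. A minimal sketch (the function name is illustrative, not from any package):

```python
def standardized_alpha(rho, n_items):
    """Standardized Cronbach's alpha from the average interitem
    correlation (rho) and the number of items."""
    return n_items * rho / (1 + (n_items - 1) * rho)

# DeVellis's point: with 10 items, an average interitem correlation
# of just 0.29 already yields an alpha of about 0.80.
print(round(standardized_alpha(0.29, 10), 2))  # 0.8
```

Running the same function with fewer items shows the flip side of the warning: 5 items at ρ = 0.29 give an alpha of only about 0.67.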

Eventually, you want an alpha coefficient of 0.80 or higher for a short list of items, all of which hang together and measure the same thing. Cronbach’s alpha will tell you if your scale hangs together, but it won’t tell you which items to throw away and which to keep. To do that, you need to identify the items that do not discriminate between people who score high and people who score low on the total set of items.

3. Finding the Item-Total Correlation

First, find the total score for each person. Add up each respondent’s scores across all the items. Table 11.6 shows what it would look like if you tested 50 items on 200 people (each x is a score for one person on one item).

Table 11.6 Finding the Item-Total Correlation

Person   Item 1   Item 2   Item 3   ...   Item 50
1        x        x        x        ...   x
2        x        x        x        ...   x
3        x        x        x        ...   x
...
200      x        x        x        ...   x

For 50 items, scored from 1 to 5, each person could get a score as low as 50 (by getting a score of 1 on each item) or as high as 250 (by getting a score of 5 on each item). In practice, of course, each person in a survey will get a total score somewhere in between.

A rough-and-ready way to find the items that discriminate well among respondents is to divide the respondents into two groups: the 25% with the highest total scores and the 25% with the lowest total scores. Look for the items on which the two groups score about the same. Those items are not discriminating among informants with regard to the concept being tested. Items that fail, for example, to discriminate between people who strongly favor training in methods (the top 25%) and people who don’t (the bottom 25%) are not good items for scaling people on this construct. Throw those items out.
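The quartile split described above can be sketched in a few lines. The scores here are hypothetical, and `quartile_groups` is an illustrative helper, not a standard function:

```python
def quartile_groups(totals):
    """Return the indices of the bottom-25% and top-25% scorers."""
    order = sorted(range(len(totals)), key=lambda i: totals[i])
    q = max(1, len(totals) // 4)   # size of each quartile group
    return order[:q], order[-q:]

# Hypothetical total scores for eight respondents.
totals = [62, 71, 55, 90, 88, 47, 83, 66]
low, high = quartile_groups(totals)
print(low, high)   # indices of the lowest and highest scorers
```

With the two groups in hand, compare each item’s mean score across them; items with little or no difference are the ones to throw out.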

There is a more formal way to find the items that discriminate well among respondents and the items that don’t. This is the item-total correlation. Table 11.7 shows the data you need for this:

Table 11.7 The Data for the Item-Total Correlation

Person   Total score   Item 1   Item 2   Item 3   ...   Item 50
1        x             x        x        x        ...   x
2        x             x        x        x        ...   x
3        x             x        x        x        ...   x
...
N        x             x        x        x        ...   x

With 50 items, the total score gives you an idea of where each person stands on the concept you’re trying to measure. If the interitem correlation were perfect, then every item would be contributing equally to our understanding of where each respondent stands. Naturally, some items do better than others. The ones that don’t contribute a lot will correlate poorly with the total score for each person. Keep the items that have the highest correlation with the total scores.
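A minimal sketch of the item-total correlation in Python, with a hand-rolled Pearson correlation and a small hypothetical score matrix (rows are respondents, columns are items):

```python
def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: four respondents scoring three items on a 1-5 scale.
scores = [[5, 4, 2],
          [4, 4, 3],
          [2, 1, 5],
          [1, 2, 4]]
totals = [sum(row) for row in scores]

for j in range(len(scores[0])):
    item = [row[j] for row in scores]
    print(f"item {j + 1}: r = {pearson(item, totals):+.2f}")
```

In this toy example the third item correlates negatively with the total score, so it would be the first candidate to throw out (or to check for reversed wording). A stricter variant, the corrected item-total correlation, removes each item from the total before correlating, so the item is not correlated partly with itself.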

You can use any statistical analysis package to find the interitem correlations, Cronbach’s alpha, and the item-total correlations for a set of preliminary scale items. Your goal is to get rid of items that detract from a high interitem correlation and to keep the alpha coefficient above 0.80. (For an excellent step-by-step explanation of item analysis, see Spector 1992:43-46.)
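If no package is handy, raw Cronbach’s alpha can also be computed directly from the item variances and the variance of the total scores. A sketch with hypothetical data:

```python
def cronbach_alpha(scores):
    """Raw Cronbach's alpha: k/(k-1) * (1 - sum of item variances
    divided by the variance of the total scores)."""
    k = len(scores[0])                      # number of items
    totals = [sum(row) for row in scores]   # each respondent's total

    def var(xs):                            # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    return k / (k - 1) * (1 - sum(item_vars) / var(totals))

# Hypothetical data: four respondents, three items, 1-5 scale.
scores = [[5, 4, 4], [4, 4, 3], [2, 1, 2], [1, 2, 1]]
print(round(cronbach_alpha(scores), 2))  # 0.94
```

This is the covariance-based (raw) alpha; it agrees with the standardized formula given earlier when all items have equal variances.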