TESTS WITH FEWER ASSUMPTIONS

In this context, and with the availability of computers and free statistics packages, it makes practical sense to use tests with fewer assumptions. An important class of these tests is the so-called “nonparametric tests.” These are described in detail elsewhere, but we’ll review some widely used examples to give the idea.

Wilcoxon Rank-Sum Test, Also Known As the Mann-Whitney U Test (or Simply the WMW Test)

The WMW test avoids “parameters” (assumptions about specific models or distributions) by analyzing only the relative ranks of the data, rather than the numerical values. This is also a test that compares two lists of observations to try to determine if one list tends to have larger numbers than the other. To formulate the rank-based test, the data from the two lists are combined, and ranks are calculated based on the whole set. Then the ranks from the two lists are added up separately and compared to the expected sum of ranks for a list of numbers that size. This expected sum can be calculated and is just the number of observations times the average rank. A little more formally,

• The observations (or data) are X1, X2, ..., Xn and Y1, Y2, ..., Ym, which we will write as X and Y.

• The ranks of the observations are R1, R2, ..., Rn+m, where each of the observations has been assigned a rank in the entire dataset.

• There are two test statistics, one based on the sum of the ranks of X and one based on the sum of the ranks of Y: UX = (sum of the ranks of X − n(n + m + 1)/2)/sU and UY = (sum of the ranks of Y − m(n + m + 1)/2)/sU. In the formulas for the test statistics, the “average rank” shows up as (n + m + 1)/2, so the expected sum of ranks is just the average times the number of observations (n or m). You might recognize that these test statistics have the following form: observed average minus expected average, divided by the standard deviation. This type of standardized difference is often called a “Z-score.” Amazingly, the formula for the standard deviation of the ranks also has a reasonably simple form, sU = √((mn/12)(m + n + 1)). However, it’s important to note that this formula assumes that there are no tied ranks in the data. This means that you never saw the same number twice. In practice, you’ll be using a statistics package to calculate these tests, so just make sure that the software is handling the tied ranks properly.
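
To make these formulas concrete, here is a minimal Python sketch with made-up numbers (and no tied values, so the simple formula for sU applies). It ranks the pooled data, sums the ranks of each list, and forms the two Z-scored statistics exactly as written above.

```python
from statistics import NormalDist

# Two made-up lists of measurements with no repeated values, so every
# observation gets a unique rank and the simple formula for sU applies.
x = [1.2, 3.4, 2.2, 5.1, 4.0]          # n = 5 observations
y = [6.3, 2.9, 7.8, 5.5, 8.1, 4.7]     # m = 6 observations
n, m = len(x), len(y)

# Rank every observation within the combined dataset (ranks 1 .. n+m).
rank = {value: i + 1 for i, value in enumerate(sorted(x + y))}

# Sum of the ranks of X and of Y.
rank_sum_x = sum(rank[v] for v in x)
rank_sum_y = sum(rank[v] for v in y)

# Expected rank sum = number of observations times the average rank
# (n + m + 1)/2, and sU = sqrt((mn/12)(m + n + 1)) for tie-free data.
expected_x = n * (n + m + 1) / 2
expected_y = m * (n + m + 1) / 2
s_u = ((m * n / 12) * (m + n + 1)) ** 0.5

# Z-scores: (observed rank sum - expected rank sum) / standard deviation.
u_x = (rank_sum_x - expected_x) / s_u
u_y = (rank_sum_y - expected_y) / s_u

# Two-sided P-value from the more extreme statistic, using the
# approximate standard Gaussian null distribution.
z = max(abs(u_x), abs(u_y))
p_value = 2 * (1 - NormalDist().cdf(z))
print(f"UX = {u_x:.3f}, UY = {u_y:.3f}, P = {p_value:.4f}")
```

(With no ties, UX and UY are just negatives of each other, so either one gives the same two-sided P-value.)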

Under the null hypothesis (that all the observations are drawn from a single distribution), these test statistics turn out to be (approximately) Gaussian distributed, with mean 0 and standard deviation 1. The P-value for the WMW test is the one associated with the U-statistic that is more extreme. Applying the WMW test to the examples mentioned earlier, we get P = 0.00008 for the Cdc6 data for Stem Cells, and P = 0.00007 for the CD4 data.
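
In practice, the whole calculation, including the handling of tied ranks, is a single function call in most statistics packages. The sketch below uses SciPy’s mannwhitneyu with made-up numbers (the Cdc6 and CD4 datasets themselves are not reproduced here); note that SciPy reports U in its conventional count form rather than as the Z-score above, but the P-value is directly comparable.

```python
from scipy.stats import mannwhitneyu

# Hypothetical measurements for two groups; these are made-up numbers,
# not the Cdc6 or CD4 data discussed in the text.
group_a = [12.1, 15.3, 9.8, 14.2, 11.7, 16.0]
group_b = [18.4, 21.2, 17.9, 22.5, 19.8, 20.3]

# mannwhitneyu ranks the pooled observations, handles tied ranks
# automatically, and returns the U statistic with a two-sided P-value.
stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {stat}, P = {p_value:.5f}")
```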

 