# Resampling Techniques

This chapter addresses general procedures for performing statistical inference on a parameter using an estimator with minimal assumptions about its distribution. In contrast to fully nonparametric approaches used earlier, these techniques use information contained in the sample to make inferences about the sampling distribution of the estimator. Two approaches, the bootstrap and the jackknife, are discussed in this chapter.

Suppose that independent and identically distributed vector observations $Z_1, \ldots, Z_n$ are drawn from a distribution with a continuous cumulative distribution function $F$, suppose that inference on a parameter $\theta$ is required, and suppose that an estimator $\hat\theta(Z_1, \ldots, Z_n)$ is used.

The parameter $\theta$ can be thought of as a function of the distribution function; for example, the population expectation is $\theta(F) = \int z \, dF(z)$, and the population median solves $F(\theta) = 1/2$. Furthermore, the estimator $\hat\theta$ is a function of the empirical distribution function $\hat F$, and often is the same functional that gives the parameter: $\hat\theta = \theta(\hat F)$. For example, an estimator of a population expectation is the sample mean, which is the expectation of the population formed by placing equal weight on each sample point. As a second example, an estimator of a population median is the sample median, which is the median of the population formed by placing equal weight on each sample point.
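This plug-in principle can be sketched in a few lines. The following Python fragment (function names are illustrative, not from the text) applies to the sample the same functional that defines the parameter for the population:

```python
# Sketch of the plug-in principle: the functional theta(F) applied to the
# empirical distribution F-hat, which puts weight 1/n on each observation,
# yields the estimator theta-hat = theta(F-hat).
import statistics

sample = [1.2, 3.4, 2.2, 5.0, 4.1]

def plug_in_mean(z):
    # expectation of the distribution placing weight 1/n on each point:
    # the sample mean
    return sum(z) / len(z)

def plug_in_median(z):
    # median of the distribution placing weight 1/n on each point:
    # the sample median
    return statistics.median(z)

print(plug_in_mean(sample))
print(plug_in_median(sample))
```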

The distribution of the estimator, and particularly how this distribution depends on the quantity to be estimated, is also required for statistical inference. In some cases, the structure of the model for a data set implies the distribution of the estimator; for example, if observations are stochastically independent indicators of whether an event occurs, the resulting distribution is known to be derivable from the distribution of Bernoulli trials. In other cases, recourse is made to a central limit theorem, to produce traditional Gaussian-theory inference. The current chapter concerns using the observed data set to provide this distributional information.

## The Bootstrap Idea

The bootstrap is a suite of tools for inference on a model parameter, using the data to give information that might otherwise come from assumptions about the parametric shape of its distribution. Let $G(\hat\theta; F)$ represent the desired, but unobservable, distribution of $\hat\theta$ computed from independent random vectors $Z_1, \ldots, Z_n$, with each random vector $Z_i$ having distribution function $F$. Assume that this distribution depends on $F$ only through the parameter of interest, so that one might write $G(\hat\theta; F) = H(\hat\theta; \theta)$ for some function $H$.

Consider constructing confidence intervals using the argument of §1.2.2.1. Using (1.18) with $T = \hat\theta$, a $1 - \alpha$ confidence interval for $\theta$ satisfies $H(\hat\theta; \theta_L) = 1 - \alpha/2$ and $H(\hat\theta; \theta_U) = \alpha/2$. The function $H$, as a function of $\theta$, is unknown, and will be estimated from the data. Since the observed data all arise from a distribution governed by a single value of $\theta$, the dependence of $H(\hat\theta; \theta)$ on $\theta$ cannot be estimated. An assumption is necessary in order to produce a confidence interval. Assume that

$\hat\theta - \theta$ has a distribution that, approximately, does not depend on $\theta$. (10.1)

### The Bootstrap Sampling Scheme

Estimate $H(\hat\theta; \theta)$ by $\hat H(\hat\theta)$, the distribution of the estimator evaluated on the set of $n$ random vectors $(Z_1^*, \ldots, Z_n^*)$, where the $Z_i^*$ are independent, and selected from $\{Z_1, \ldots, Z_n\}$ with probability $1/n$ for each. That is, if $\hat\theta_{b,i}$ are the values of $\hat\theta$ evaluated at each of the $n^n$ equally likely resamples, then
$$\hat H(t) = n^{-n} \sum_{i=1}^{n^n} I(\hat\theta_{b,i} \le t).$$
Here again, the function $I$ of a logical argument is 1 if the argument is true, and 0 if it is false. Generally, the function $\hat\theta$ is symmetric in its arguments (for example, $\hat\theta(z_1, z_2, \ldots, z_n) = \hat\theta(z_2, z_1, \ldots, z_n)$, and similarly for other permutations of the arguments), and so the distribution of $\hat\theta(Z_1^*, \ldots, Z_n^*)$ is a discrete distribution supported on fewer than $n^n$ values, although any shortcuts that exploit this symmetry still leave the exact enumeration of the distribution of $\hat\theta$ under the resampling distribution intractable.
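For very small $n$, the exhaustive resampling distribution can be computed directly. The following Python sketch (illustrative names, not from the text) enumerates all $n^n$ equally likely ordered resamples and tabulates the indicator sum:

```python
# Minimal sketch of the exhaustive resampling distribution: for tiny n,
# enumerate all n^n equally likely ordered resamples and tabulate the
# fraction of resamples on which the estimator falls at or below t.
from itertools import product

def exhaustive_H(z, estimator, t):
    """Fraction of the n^n resamples whose estimator value is <= t."""
    n = len(z)
    count = 0
    for draw in product(z, repeat=n):  # all n^n ordered resamples
        if estimator(draw) <= t:       # indicator I(estimator <= t)
            count += 1
    return count / n**n

z = [1.0, 2.0, 4.0]
mean = lambda v: sum(v) / len(v)
print(exhaustive_H(z, mean, 2.0))  # fraction of resampled means <= 2.0
```

Even at $n = 10$ this loop visits $10^{10}$ resamples, which motivates the Monte Carlo approach that follows.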

Because this exhaustive approach consumes excessive resources, one almost always proceeds via random sampling. Choose a number $B$ of random samples to draw. One draws new random samples from the population represented by the original sample, with replacement. This sampling with replacement distinguishes the bootstrap from previous permutation techniques. For random sample $i$, evaluate the estimator on this sample, and call it $\hat\theta^*_i$. The collection of such values is called the bootstrap sample. Let
$$\hat H^*(t) = B^{-1} \sum_{i=1}^{B} I(\hat\theta^*_i \le t).$$
Call this distribution the resampling distribution.
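The random sampling scheme itself is brief. A minimal Python sketch (names illustrative) draws $B$ resamples with replacement, evaluates the estimator on each, and tabulates the resampling distribution:

```python
# Sketch of the bootstrap sampling scheme: draw B resamples with
# replacement, collect the estimator values (the bootstrap sample), and
# estimate the resampling distribution from them.
import random
import statistics

def bootstrap_sample(z, estimator, B, seed=0):
    rng = random.Random(seed)
    n = len(z)
    return [estimator([rng.choice(z) for _ in range(n)]) for _ in range(B)]

def H_star(boot, t):
    # resampling distribution: fraction of bootstrap values at or below t
    return sum(1 for v in boot if v <= t) / len(boot)

z = [0.12, 0.25, 0.31, 0.19, 0.40, 0.22, 0.28, 0.15]
boot = bootstrap_sample(z, statistics.median, B=999)
print(H_star(boot, statistics.median(z)))
```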

When approximating $H$ by $\hat H$, two sources of error arise: the error in approximating $H$ by $\hat H$, governed by the sample size, and the error in approximating $\hat H$ by $\hat H^*$, which is governed by $B$. Generally speaking, moderate values for $B$ (for example, 999, or 9999) are sufficient to make the second source of error ignorable in the presence of the first.

Techniques below will need quantiles of $\hat H^*$, which are determined by ordered values of the bootstrap sample. Express the ordered values of $\hat\theta^*_i$ by $\hat\theta^*_{(i)}$. Order statistics from the bootstrap sample are used to estimate quantiles of the bootstrap distribution. The most naive approach uses $\hat\theta^*_{(i)}$ to represent quantile $i/B$ of the bootstrap distribution. By this logic, $\hat\theta^*_{(1)}$ estimates the $1/B$ quantile, and $\hat\theta^*_{(B)}$ represents the 1 quantile; that is, $\hat\theta^*_{(B)}$ approximates a value with all of the true sampling distribution for $\hat\theta$ at or below it. Conceptually, the estimation problem ought to be symmetric if the order of bootstrap observations is swapped, but these naive quantiles are not. The upper quantile is wrong, since the population that $\hat\theta^*_{(B)}$ might be intended to represent might not exist on a bounded interval. One may make this quantile definition symmetric by taking $\hat\theta^*_{(i)}$ to represent quantile $i/(B+1)$ of this distribution.

Bootstrap techniques below will use the analogy
$$\hat\theta^*_i - \hat\theta \approx \hat\theta - \theta \text{ in distribution.} \qquad (10.2)$$
Since $\hat H^*$ is approximately centered about $\hat\theta$, bootstrap techniques will require that $\hat\theta$ is defined so that
$$\hat\theta \text{ is approximately centered about } \theta. \qquad (10.3)$$
The statements of conditions (10.1) and (10.3) are purposely vague; Abramovitch and Singh (1985) give an early set of specific conditions, and Hall (1992) presents a manuscript-length set of tools for assessing the appropriateness of bootstrapping in various contexts. Most results guaranteeing bootstrap accuracy rely on the existence of an Edgeworth approximation to $o(1/\sqrt{n})$; that is, they require the existence of constants $\kappa_2$ and $\kappa_3$ such that (6.5) holds for the distribution of $\hat\theta$, with the approximate expectation $\kappa_1$ equal to $\theta$, the terms involving $\kappa_4$ removed, and with error bounded by a constant divided by $n$. Such results will not apply to bootstrap inference using the sample mean for the Cauchy distribution, for example, since the variance of the Cauchy distribution is not finite. The bootstrap sometimes performs poorly when (6.5) fails to hold (Hall, 1988).

Alternatively, if one is willing to assume that $F$ takes a parametric form, one might sample from the distribution $F(\cdot; \hat\theta)$, and use these samples as above to construct $\hat H^*$. This technique is called the parametric bootstrap (Efron and Tibshirani, 1993, §6.5).
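The parametric bootstrap differs only in how resamples are drawn. The following hedged Python sketch assumes, purely for illustration, a Gaussian form for $F$ fitted by the sample mean and standard deviation; the names are illustrative:

```python
# Hedged sketch of the parametric bootstrap: fit a parametric model (here
# a Gaussian, an assumption made only for illustration) and draw the
# resamples from the fitted distribution rather than from the data.
import random
import statistics

def parametric_bootstrap(z, estimator, B, seed=0):
    rng = random.Random(seed)
    mu, sd = statistics.fmean(z), statistics.stdev(z)  # fitted parameters
    n = len(z)
    return [estimator([rng.gauss(mu, sd) for _ in range(n)])
            for _ in range(B)]

z = [1.1, 0.8, 1.4, 1.0, 1.3, 0.9]
boot = parametric_bootstrap(z, statistics.fmean, B=500)
print(min(boot), max(boot))
```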

## Univariate Bootstrap Techniques

Various strategies exist for using these samples in the simplest univariate contexts. Terminology below is consistent with the R package boot.

### The Normal Method

If one is willing to assume that, approximately, $\hat\theta$ has a Gaussian distribution centered at $\theta$ with some unknown variance, one may use the bootstrap sample to estimate the standard error of the estimator. Consider the standard error estimate
$$\hat\varsigma^2 = B^{-1} \sum_{i=1}^{B} (\hat\theta^*_i - \theta)^2. \qquad (10.4)$$
Using $\bar\theta^* = \sum_{i=1}^{B} \hat\theta^*_i / B$ in place of $\theta$ in (10.4) results in an estimate that is systematically too small. Use $B - 1$ instead of $B$ in the denominator of $\hat\varsigma^2$, or $\hat\theta$ in place of $\bar\theta^*$, to respond to this undercoverage. The better estimate is
$$\hat\varsigma^2 = (B - 1)^{-1} \sum_{i=1}^{B} (\hat\theta^*_i - \bar\theta^*)^2.$$
Then the standard deviation estimate $\hat\varsigma$ is the sample standard deviation of the bootstrap samples (Efron, 1981), and a $1 - \alpha$ confidence interval is
$$\hat\theta \pm \hat\varsigma z_{\alpha/2}, \text{ for } \hat\varsigma \text{ the sample standard deviation of the bootstrap samples.} \qquad (10.5)$$
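A minimal Python sketch of the normal method (illustrative names; the critical value 1.96 corresponds to $\alpha = 0.05$, and the sample standard deviation uses $B - 1$ in the denominator):

```python
# Sketch of the normal-method interval: the standard error estimate is the
# sample standard deviation of the bootstrap values, and the interval is
# theta-hat +/- s * z_{alpha/2}.
import random
import statistics

def normal_interval(z, estimator, B=999, z_crit=1.96, seed=0):
    rng = random.Random(seed)
    n = len(z)
    boot = [estimator([rng.choice(z) for _ in range(n)]) for _ in range(B)]
    theta_hat = estimator(z)
    s = statistics.stdev(boot)  # denominator B - 1, per the text
    return theta_hat - z_crit * s, theta_hat + z_crit * s

z = [0.12, 0.25, 0.31, 0.19, 0.40, 0.22, 0.28, 0.15]
lo, hi = normal_interval(z, statistics.median)
print(lo, hi)
```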

### Basic Interval

Often one is unwilling to assume that the estimator is approximately Gaussian. Assume conditions (10.1) and (10.3). Use the distribution of $\hat\theta^*_i - \hat\theta$ as a proxy for that of $\hat\theta - \theta$. A confidence interval for $\theta$ could be constructed by determining $v_L$ and $v_U$ to satisfy
$$P[v_L \le \hat\theta - \theta \le v_U] = 1 - \alpha, \qquad (10.6)$$
if this distribution were known.

Let $u_L$ and $u_U$ be the $\alpha/2$ and $1 - \alpha/2$ quantiles of the $\hat\theta^*_i$, respectively. Then
$$P_*[u_L - \hat\theta \le \hat\theta^*_i - \hat\theta \le u_U - \hat\theta] = 1 - \alpha, \qquad (10.7)$$
where $P_*[\cdot]$ is the probability function associated with $\hat H^*$. Using analogy (10.2), equate endpoints of (10.7) and (10.6), to estimate the quantiles $v_U$ and $v_L$ of $\hat\theta^*_i - \hat\theta$ by $v_L = u_L - \hat\theta$, and $v_U = u_U - \hat\theta$. Then a confidence interval for $\theta$ is
$$(2\hat\theta - u_U, 2\hat\theta - u_L).$$
This is the basic bootstrap confidence interval (Davison and Hinkley, 1997, p. 29).
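The basic interval can be sketched as follows (Python, illustrative names); the order statistic at index $i$ is taken to represent quantile $i/(B+1)$, as discussed earlier:

```python
# Sketch of the basic bootstrap interval: with u_L and u_U the alpha/2 and
# 1 - alpha/2 bootstrap quantiles, the interval is
# (2*theta_hat - u_U, 2*theta_hat - u_L).
import random
import statistics

def basic_interval(z, estimator, B=999, alpha=0.05, seed=0):
    rng = random.Random(seed)
    n = len(z)
    boot = sorted(estimator([rng.choice(z) for _ in range(n)])
                  for _ in range(B))
    # order statistic i represents quantile i/(B+1)
    i_lo = max(1, round(alpha / 2 * (B + 1)))
    i_hi = min(B, round((1 - alpha / 2) * (B + 1)))
    u_L, u_U = boot[i_lo - 1], boot[i_hi - 1]
    theta_hat = estimator(z)
    return 2 * theta_hat - u_U, 2 * theta_hat - u_L

z = [0.12, 0.25, 0.31, 0.19, 0.40, 0.22, 0.28, 0.15]
lo, hi = basic_interval(z, statistics.median)
print(lo, hi)
```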

### The Percentile Method

Suppose that $\hat\theta$ has a distribution symmetric about $\theta$. In this case, the quantiles $v_L$ and $v_U$ of $\hat\theta - \theta$ satisfy $v_L \approx -v_U$, and so one can use $v_U = \hat\theta - u_L$ and $v_L = \hat\theta - u_U$. The confidence interval is now
$$(u_L, u_U)$$
(Efron, 1981).
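The percentile interval simply reads off the two bootstrap quantiles. A Python sketch with illustrative names, using the same $i/(B+1)$ quantile convention:

```python
# Sketch of the percentile interval: under approximate symmetry the
# interval is (u_L, u_U), the alpha/2 and 1 - alpha/2 bootstrap quantiles.
import random
import statistics

def percentile_interval(z, estimator, B=999, alpha=0.05, seed=0):
    rng = random.Random(seed)
    n = len(z)
    boot = sorted(estimator([rng.choice(z) for _ in range(n)])
                  for _ in range(B))
    i_lo = max(1, round(alpha / 2 * (B + 1)))
    i_hi = min(B, round((1 - alpha / 2) * (B + 1)))
    return boot[i_lo - 1], boot[i_hi - 1]

z = [0.12, 0.25, 0.31, 0.19, 0.40, 0.22, 0.28, 0.15]
lo, hi = percentile_interval(z, statistics.median)
print(lo, hi)
```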

This method is referred to as the percentile method (Efron, 1981), or as the usual method (Shao and Tu, 1995, p. 132f). If the parameter $\theta$ is transformed to a new parameter $\eta$ using a monotonic transformation, then the bootstrap samples transform in the same way, and so the percentile method applies whenever there exists a monotonic transformation to symmetry, regardless of whether one knows and can apply this transformation.

Example 10.2.1 Consider again the arsenic data of Example 2.3.2. We calculate a confidence interval for the median.

```r
meds<-rep(NA,999)
attach(arsenic)
for(j in seq(length(meds)))
   meds[j]<-median(sample(nails,length(nails),replace=TRUE))
```

gives the bootstrap samples. The sample function draws a random sample with replacement. Then

```r
cat(ci<-quantile(meds,probs=c(.025,.975))," ")
```

gives the percentile confidence interval (0.119 0.310), and

```r
cat(" Residual Bootstrap for Median ")
cat(ci<-2*median(nails)-rev(ci)," ")
detach(arsenic)
```

gives the basic or residual confidence interval (0.040, 0.231). Recall that the estimate of the density for the nail arsenic values was plotted in Figure 8.1. This distribution is markedly asymmetric, and so the percentile bootstrap is not reliable; use the residual bootstrap.

### BCa Method

This method was introduced by Efron (1987), who called it the BCa method. The BCa method extends his BC method, which he terms “bias-corrected”; Efron and Tibshirani (1993) refer to the method of this section as bias-corrected and accelerated. Again, suppose one desires a confidence interval for $\theta$, with estimator $\hat\theta$. As in §10.2.3, suppose $\hat\theta$ can be transformed to symmetry using a transformation $\phi$ (which need not be known). Without loss of generality, this symmetric distribution may be taken as Gaussian. Suppose further that $\phi(\hat\theta)$ has a standard deviation that depends linearly on $\phi(\theta)$, and that $\phi(\hat\theta)$ has a bias that depends linearly on the standard deviation of $\phi(\hat\theta)$. That is, assume that there exist a transformation $\phi$ and constants $a$ and $\zeta$ such that
$$\frac{\phi(\hat\theta) - \phi(\theta)}{1 + a\phi(\theta)} + \zeta \qquad (10.8)$$
is approximately standard Gaussian. Let $\theta^*$ be the value of $\theta$ giving quantile $1 - \alpha$ for $\hat\theta$. Substituting $\theta^*$ for $\theta$ into (10.8), and equating $\hat\theta$ with $\theta$, gives a tail probability; equating the bootstrap distribution function to this tail probability, the corresponding quantile is defined by (10.9). The quantity $a$ is called the acceleration constant. The bias $\zeta$ may be estimated by the difference between the estimate and the median of the bootstrap samples, and $a$ may be estimated using the skewness of the bootstrap sample.
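The calculation can be sketched in Python using common textbook formulas: a bias correction from the proportion of bootstrap values below $\hat\theta$, and, as one standard alternative to the skewness estimate mentioned above, a jackknife estimate of the acceleration $a$. All names are illustrative, and the formulas are assumptions of this sketch rather than the text's own displays:

```python
# Hedged sketch of a BCa interval. The bias correction z0 and the
# jackknife acceleration a follow common textbook formulas; the adjusted
# quantile levels play the role of the corrected quantiles of (10.9).
import random
import statistics
from statistics import NormalDist

def bca_interval(z, estimator, B=1999, alpha=0.05, seed=0):
    rng, n, nd = random.Random(seed), len(z), NormalDist()
    theta_hat = estimator(z)
    boot = sorted(estimator([rng.choice(z) for _ in range(n)])
                  for _ in range(B))
    # bias correction: z0 from the fraction of bootstrap values below
    # theta-hat, clamped away from 0 and 1
    prop = sum(1 for v in boot if v < theta_hat) / B
    z0 = nd.inv_cdf(min(max(prop, 1 / B), 1 - 1 / B))
    # acceleration via jackknife (leave-one-out) values
    jack = [estimator(z[:i] + z[i + 1:]) for i in range(n)]
    jbar = sum(jack) / n
    num = sum((jbar - j) ** 3 for j in jack)
    den = 6 * (sum((jbar - j) ** 2 for j in jack)) ** 1.5
    a = num / den if den else 0.0
    def adj(q):  # adjusted quantile level
        zq = z0 + nd.inv_cdf(q)
        return nd.cdf(z0 + zq / (1 - a * zq))
    lo_q, hi_q = adj(alpha / 2), adj(1 - alpha / 2)
    return boot[int(lo_q * B)], boot[min(B - 1, int(hi_q * B))]

z = [0.12, 0.25, 0.31, 0.19, 0.40, 0.22, 0.28, 0.15]
lo, hi = bca_interval(z, statistics.median)
print(lo, hi)
```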

Example 10.2.2 Return again to the nail arsenic values of the previous example. We again generate a confidence interval for the median. The BCa method, and the previous two methods, may be obtained using the package boot.

```r
library(boot)#gives boot and boot.ci.
#Define the function to be applied to data sets to get
#the parameter to be bootstrapped.
boot.ci(boot(arsenic$nails,function(x,index)
   return(median(x[index])),9999))
```

to give

```
Level      Normal               Basic
95%   ( 0.0158,  0.2733 )   ( 0.0400,  0.2310 )

Level     Percentile             BCa
95%   ( 0.119,  0.310 )     ( 0.118,  0.277 )
```

In this case, the normal and percentile intervals are suspect, because of the asymmetry of the distribution. The more reliable interval is the bias corrected and accelerated interval.

Recall that an exact confidence interval may be constructed using

```r
library(MultNonParam)
exactquantileci(arsenic$nails)
```

to obtain the interval (0.118, 0.354). Efron (1981) notes that this exact interval will generally agree closely with the percentile bootstrap approach.

One can use the bootstrap to generate intervals for more complicated statistics. The bootstrap techniques described so far, except for the percentile method, presume that parameter values over the entire real line are possible. One can account for this through transformation.

Example 10.2.3 A similar approach may be taken to a confidence interval for the standard deviation of nail arsenic values. In this case, first change to the log scale.

```r
logscale<-function(x,index) return(log(sd(x[index])))
sdbootsamp<-boot(arsenic$nails,logscale,9999)
sdoutput<-boot.ci(sdbootsamp)
```

Figure 10.1 shows the bootstrap samples; the plot produced by

```r
plot(density(sdbootsamp$t))
```

shows a highly asymmetric distribution, and the BCa correction for asymmetry is strong. (As noted before, the actual bootstrap distribution is supported on a large but finite number of values, and is hence discrete and does not have a density; the plot is heuristic only.) The output from boot.ci contains some information not generally revealed by its default printing method. In particular, sdoutput$bca is a vector with five numeric components. The first of these is the confidence level. The fourth and fifth are the resulting confidence interval end points. The second and third give the quantiles resulting from (10.9). The upper quantile is very close to the maximum value of 9999; boot.ci gives a warning, and serious application of BCa intervals would better be done using more bootstrap samples. Rerunning with

```r
boot.ci(boot(arsenic$nails,logscale,99999))$bca
```

gives the BCa 0.95 interval (-1.647, -0.096) for the log of the standard deviation of nail arsenic.

FIGURE 10.1: Bootstrap Samples for Log of Nail Arsenic Standard Deviation

### Summary So Far, and More Examples

Table 10.1 contains the observed coverages for nominal 0.90 confidence intervals for the medians of simulated data sets, for various distributions. Random samples of size 20 were drawn 1000 times from each distribution, and in each case, 9999 bootstrap replicates were constructed. For each random sample, each interval was checked to see whether it contained the true population median.

TABLE 10.1: Observed coverage for nominal 0.90 Bootstrap intervals, 10 observations

|             | Normal | Basic | Percentile | BCa   |
|-------------|--------|-------|------------|-------|
| Exponential | 0.863  | 0.772 | 0.891      | 0.895 |
| Uniform     | 0.833  | 0.754 | 0.901      | 0.900 |
| Cauchy      | 0.917  | 0.835 | 0.875      | 0.862 |

The percentile bootstrap performed remarkably well for the exponential distribution, in light of the interval’s construction assuming symmetry. None showed significant degradation when data came from a very heavy-tailed Cauchy distribution.

One can apply many of these inferential techniques to the parametric bootstrap.

Example 10.2.4 Consider again the brain-volume data of Example 5.2.1. Treat the brain volume differences as having a Gaussian distribution. Function boot recognizes the parametric context through the sim="parametric" argument; one specifies the specific parametric assumption by providing a random number generator for bootstrap samples.

```r
cat(" Parametric Bootstrap for median difference ")
qmed<-function(x,indices) return(median(x[indices]))
ran.diff<-function(x,ests)
   return(ests[1]+ests[2]*rnorm(length(x)))
bootout<-boot(brainpairs$diff,qmed,R=999,sim="parametric",
   ran.gen=ran.diff,mle=c(mean(brainpairs$diff),
   sd(brainpairs$diff)))
boot.ci(bootout,type=c("basic","norm"))
```

The basic interval is (-30.056, 52.720). The normal interval is almost identical.