
## Bootstrap

Since its introduction by Efron (1979), the bootstrap method has received a great deal of attention in the literature (see Efron, 1979, 1982; Efron and Tibshirani, 1993; Hall, 1986a, 1990, 1992; and Wu, 1986). Given a random sample $X_1, \dots, X_n$ from a distribution $F$ and a statistic that depends on the sample and possibly on the underlying distribution $F$, the aim is to estimate the distribution of $R$,

$$R = R(X_1, \dots, X_n; F),$$

by that of

$$R^* = R(X^*_1, \dots, X^*_n; F_n),$$

where $X^*_1, \dots, X^*_n$ denotes a random sample taken from the distribution $F_n$. The replacement of $F$ by $F_n$ is called the "plug-in" principle (see Efron and Tibshirani, 1993). Efron (1979) showed that the distribution of $R^*$ is a reasonable estimate in some simple cases and established the validity of the principle for a general class of statistics when the sampling space is finite. When applying this principle to a particular statistic we should check whether this approximation is "good". The definition of "goodness" might depend on the particular problem at hand. For example, the criteria could be different if we want to estimate moments or percentiles of the distribution of $R$. In some cases we are interested in the asymptotic distribution of

$$n^{\gamma} \left( \hat{\theta}_n - \theta(F) \right), \qquad (19)$$

where $\gamma$ is a real number and $\theta(F)$ denotes a parameter of the distribution $F$. If we want to estimate the limiting distribution of this sequence by the plug-in principle, we should show that the sequence

$$n^{\gamma} \left( \hat{\theta}^*_n - \theta(F_n) \right)$$

has the same limiting distribution as (19). Bickel and Freedman (1981) give some general asymptotic theory to answer this question. They prove that under certain regularity conditions the bootstrap approximation to the asymptotic distribution works for sample means, von Mises functionals (Fernholz, 1983), quantiles, and trimmed means, among others. In particular, they establish the following two theorems for means and smooth functions of means, respectively. Let $X_1, \dots, X_n$ be $n$ independent vector-valued random variables in $\mathbb{R}^k$ with common cumulative distribution function $F$. Let $\bar{X}_n$ denote their sample mean and $F_n$ their empirical distribution function. Similarly, let $X^*_1, \dots, X^*_m$ be $m$ independent vector-valued random variables in $\mathbb{R}^k$ with common cumulative distribution function $F_n$. Let $\bar{X}^*_m$ denote their sample mean. Assume that $E|X_1|^2 < \infty$ and let $\Sigma$ be the covariance matrix of $X_1$. We have the following theorem (see Bickel and Freedman, 1981):

Theorem 2 (Bickel and Freedman)   For almost all sample sequences $X_1, X_2, \dots$, given $(X_1, \dots, X_n)$, as $n$ and $m$ tend to infinity the conditional distribution of $\sqrt{m} \left( \bar{X}^*_m - \bar{X}_n \right)$ converges weakly to $N(0, \Sigma)$.
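This result can be illustrated with a small simulation (a sketch, not part of the original development): in the univariate case the bootstrap replicates of $\sqrt{m}(\bar{X}^*_m - \bar{X}_n)$ should have approximately the variance of the underlying distribution. The exponential model and the sample sizes below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Original sample from an exponential distribution with variance 1.
n = 5000
x = rng.exponential(scale=1.0, size=n)

# Bootstrap replicates of sqrt(m) * (mean(x*) - mean(x)), taking m = n.
B = 2000
m = n
reps = np.empty(B)
for b in range(B):
    xstar = rng.choice(x, size=m, replace=True)  # resample from F_n
    reps[b] = np.sqrt(m) * (xstar.mean() - x.mean())

# The theorem predicts these replicates are approximately N(0, Sigma);
# here Sigma is the scalar variance of X, equal to 1.
print(reps.mean())  # close to 0
print(reps.var())   # close to 1
```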

To state the second theorem we need some notation. Let $\bar{X}_n$ and $\bar{X}^*_m$ be as in the previous theorem. Let

$$S_n = \sqrt{n} \left( \bar{X}_n - \mu \right),$$

where $\mu = E(X_1)$. Let

$$S^*_m = \sqrt{m} \left( \bar{X}^*_m - \bar{X}_n \right),$$

and let $g$ be a real-valued function with finite differential $\nabla g(\mu)$ at $\mu$. The following theorem states that, under regularity conditions, bootstrapping commutes with smooth functions.

Theorem 3 (Bickel and Freedman)   Let $S_n$, $S^*_m$, and $g$ be as in the previous paragraphs. If $E|X_1|^2 < \infty$, then for almost all sample sequences the conditional distribution of $\sqrt{m} \left( g(\bar{X}^*_m) - g(\bar{X}_n) \right)$ converges weakly to $N\left( 0, \nabla g(\mu)^T \Sigma \, \nabla g(\mu) \right)$.
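This theorem can likewise be checked numerically. The sketch below uses arbitrary illustrative choices: an exponential model with $\mu = 1$ and $\sigma^2 = 1$, and the smooth function $g(t) = t^2$, for which the limiting variance is $\nabla g(\mu)^2 \sigma^2 = 4$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample with mean mu = 1 and variance sigma^2 = 1 (exponential).
n = 5000
x = rng.exponential(scale=1.0, size=n)

# Smooth function g(t) = t^2, with derivative 2*mu = 2 at mu = 1.
g = lambda t: t ** 2

# Bootstrap replicates of sqrt(m) * (g(mean(x*)) - g(mean(x))), m = n.
B = 2000
reps = np.empty(B)
for b in range(B):
    xstar = rng.choice(x, size=n, replace=True)
    reps[b] = np.sqrt(n) * (g(xstar.mean()) - g(x.mean()))

# The limiting variance is grad_g(mu)^2 * sigma^2 = 2^2 * 1 = 4.
print(reps.var())  # close to 4
```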

Many papers in the literature deal with the problem of determining the accuracy and order of coverage of bootstrap confidence intervals (see, for example, Hall 1986a, 1986b, 1988a, and 1990). At this stage of our work we are only interested in the asymptotic validity of our methods; further refinements will be considered in future work (see item 6 in section (5.4)). There are results in the literature showing that the bootstrap also works to approximate the asymptotic distribution of robust estimates; see, for example, Shorack (1982), Parr (1985), Yang (1985), Lohse (1987), Shao (1990), Cheng (1991), and Arcones et al. (1992). See also Dümbgen (1993) and Cuevas et al. (1993). The bootstrap can be used to calculate confidence intervals via the estimation of the asymptotic variance or by obtaining approximate percentiles of the limiting distribution (see Hall 1988a for a comprehensive discussion). Two serious problems arise in either case: first, since bootstrap samples are taken with replacement, the proportion of outliers in a bootstrap sample may be higher than in the original one; second, the computational complexity of robust estimates imposes an upper bound on the number of recalculations that are feasible. We will call the first problem the "lack of robustness" of the classical bootstrap. In agreement with Shao (1990), we found that the bootstrap distribution has heavy tails that produce inflated variance estimators and unduly long confidence intervals. Between 2,000 and 3,000 bootstrap samples are needed to estimate the percentiles for a confidence interval (Efron and Tibshirani, 1993), and that many recalculations of a robust regression estimate are infeasible in practice with today's technology.
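The first problem is easy to quantify: under resampling with replacement, the number of outliers in a bootstrap sample follows a binomial distribution, so the contamination fraction can exceed the original one with non-negligible probability. The sketch below uses purely illustrative numbers (sample size, outlier count, and breakdown fraction are hypothetical assumptions).

```python
from math import comb

# Hypothetical setting: a sample of size n contains k outliers, and the
# robust estimate breaks down once the contamination fraction exceeds
# `breakdown`. All three numbers are illustrative assumptions.
n, k, breakdown = 40, 8, 0.3  # 20% observed contamination

# With replacement, the outlier count in a bootstrap sample is
# Binomial(n, k/n); sum the exact upper tail beyond the breakdown level.
p = k / n
threshold = int(breakdown * n)
prob_exceed = sum(
    comb(n, j) * p ** j * (1 - p) ** (n - j)
    for j in range(threshold + 1, n + 1)
)
print(prob_exceed)  # a few percent of bootstrap samples break down
```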
