Introduction to 1-click bootstrapping

Phillip Ramsey, JMP User Community @philramsey, Consultant and Professor, University of New Hampshire

This article originally appeared in JMPer Cable, Issue 28, Summer 2013.

Bootstrapping is an important capability offered in JMP Pro. The implementation of bootstrapping is simple and flexible because bootstrapped distributions can be generated within most JMP Pro statistical reports. This article provides a brief tutorial on basic bootstrapping and describes applications of bootstrapping within the contexts of standalone inferential methods, pedagogy, and as a component of other statistical methods.

Bootstrapping is a common English language metaphor for a process that is self-sustaining and continues without need for an external driving force. Bradley Efron (1979) adopted the metaphor bootstrap to describe a new resampling technique he had developed. His work was motivated by earlier work on jackknifing (Tukey, 1958).

Statistical bootstrapping as developed by Bradley Efron (1979) is a process whereby independent, random samples of size N are repeatedly drawn (possibly thousands of times) with replacement from an original data set of size N. Replacement means that any individual observation in the original data set may be selected multiple times, or perhaps not at all, in any one resample. This assures that each bootstrap sample is very likely to differ in membership from the original data.
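
For readers who want to see the resampling mechanics outside of JMP, here is a minimal sketch in Python with NumPy. The data are an illustrative stand-in, not the temperature readings used later in this article:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative stand-in sample of size N = 20 (not real data).
original = rng.normal(loc=300.0, scale=10.0, size=20)

# One bootstrap resample: draw N observations from the original N with replacement,
# so some observations may appear several times and others not at all.
resample = rng.choice(original, size=original.size, replace=True)
```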

By computing sample statistics of interest on each resample, empirical distribution functions for these statistics are generated. The empirical distributions can in turn be used to estimate standard errors for these statistics and to form confidence intervals for the theoretical parameters they estimate. The bootstrap confidence intervals (there are a number of versions of them) have the advantage that the computation does not necessarily require a parametric assumption or a specific mathematical formula. Hall and Wilson (1991) show that bootstrapping is easily adapted to hypothesis testing.

The most straightforward way to compute a bootstrap (1-α)100 percent confidence interval for a theoretical parameter is to use the appropriate percentiles of the bootstrapped distribution for that sample statistic, which estimates the theoretical parameter of interest. As an example, the 2.5th and 97.5th percentiles of the bootstrapped distribution of a sample average can be used to form a 95 percent confidence interval for the theoretical mean of the underlying distribution.
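
As a concrete (non-JMP) illustration of the percentile interval, the following Python sketch bootstraps the sample mean, estimates its standard error from the bootstrapped distribution, and reads off the 2.5th and 97.5th percentiles. The data and the number of resamples are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
sample = rng.normal(loc=300.0, scale=10.0, size=20)       # illustrative stand-in data

B = 2000                                                  # number of bootstrap resamples
boot_means = np.empty(B)
for b in range(B):
    resample = rng.choice(sample, size=sample.size, replace=True)
    boot_means[b] = resample.mean()

boot_se = boot_means.std(ddof=1)                          # bootstrap estimate of the standard error
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])  # 95 percent percentile interval
print(f"bootstrap SE = {boot_se:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```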

Research on the bootstrap found that, in some cases, the bootstrap percentile confidence intervals could be substantially biased in terms of actual coverage (Chernick, 2008). Bias correction algorithms exist for percentile confidence intervals; however, they are not currently available in JMP Pro, and further discussion is beyond the scope of this article. (Updated December 2018: Bias correction algorithms are now included in JMP Pro.)

Statistical inference based on the bootstrap

The following example shows how to use JMP Pro for bootstrapping to generate percentile confidence intervals for theoretical parameters of target populations. The example uses 20 temperature readings (Kelvin) originally from Cox and Snell (1981).

Suppose you want to find confidence intervals for the interquartile range (IQR) of the sample, which is the difference between the 75th and 25th percentiles of the sample distribution. Exact computation of confidence intervals for the IQR is generally not possible.
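
For reference, the IQR is just the difference of two sample percentiles. A small Python helper makes this explicit (note that JMP's quantile interpolation rule may differ slightly from NumPy's default):

```python
import numpy as np

def iqr(x):
    """Interquartile range: the 75th percentile minus the 25th percentile."""
    q25, q75 = np.percentile(x, [25, 75])
    return q75 - q25

print(iqr([2, 4, 6, 8, 10]))  # 4.0
```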

Begin by using the Distribution platform in JMP to generate a table of summary statistics for the original sample. The interquartile range does not show by default in the Summary Statistics table, so use the red triangle menu to request it, as shown in Figure 1. Also, because the bootstrap process operates on all statistics showing in the table to which it refers, uncheck all the default summary statistics that are not of interest in this example. The Distribution Summary table then displays only the interquartile range, as shown in Figure 1.

Figure 1 Tailor Summary Statistics table to show only Interquartile Range

Next, place the cursor in the body of the report, right-click, and select Bootstrap from the menu that appears (Figure 2). In the Bootstrap dialog, select the Fractional Weights option; this option provides smoother bootstrap samples (Rubin, 1981).

Figure 2 Choose the Bootstrap with sample size 200
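
Conceptually, the Fractional Weights option is in the spirit of Rubin's Bayesian bootstrap: rather than resampling whole observations, each bootstrap iteration assigns every observation a smooth random weight. The sketch below illustrates that idea with Dirichlet weights and a weighted mean; it is only a conceptual analogue, not JMP's actual implementation, and the data are again a stand-in:

```python
import numpy as np

rng = np.random.default_rng(seed=3)
sample = rng.normal(loc=300.0, scale=10.0, size=20)   # illustrative stand-in data

B = 2000
boot_means = np.empty(B)
for b in range(B):
    # Smooth, fractional weights that sum to 1 (Rubin's Bayesian bootstrap uses
    # Dirichlet(1, ..., 1) weights); every observation contributes to every resample.
    w = rng.dirichlet(np.ones(sample.size))
    boot_means[b] = np.sum(w * sample)                # weighted mean under these weights

print(np.percentile(boot_means, [2.5, 97.5]))         # 95 percent percentile limits
```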

This example generates 200 samples. You can enter any number of samples you want, but keep in mind that each sample reruns the Distribution platform and computes the interquartile range for that sample. So, the number of samples you choose depends on the computational resources you have. A large bootstrap request can be both resource and time intensive.

When the bootstrap process is complete, you see a JMP table with the sampling results. The table contains a BootID column and an Interquartile Range column for the Temperature variable. The number of rows is the number of samples you requested, plus one for the original-sample computation. Now use the Distribution platform to look at summary statistics for the bootstrapped Interquartile Range.

JMP platforms that have the bootstrap option recognize the BootID variable and display the Bootstrap Confidence Limits table, as shown in Figure 3.

Figure 3 Distribution of bootstrapped Interquartile Range

The interquartile range computed from the original sample is 28.25 (Figure 1). Note that the mean of the 200 bootstrapped interquartile ranges is 25.4 and that the 95 percent bootstrap confidence limits, taken from the quantiles of the bootstrapped values, are 7 and 45, which include the original estimate.
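
To see the same workflow end to end outside of JMP, the following Python sketch draws 200 bootstrap resamples, computes the interquartile range of each, and takes the 2.5th and 97.5th percentiles as the confidence limits. The data are a stand-in for the Cox and Snell readings, so the numbers will not match Figures 1-3:

```python
import numpy as np

rng = np.random.default_rng(seed=4)
temps = rng.normal(loc=300.0, scale=20.0, size=20)    # stand-in for the 20 temperature readings

def iqr(x):
    q25, q75 = np.percentile(x, [25, 75])
    return q75 - q25

B = 200                                               # matches the 200 samples requested in JMP
boot_iqr = np.array([iqr(rng.choice(temps, size=temps.size, replace=True)) for _ in range(B)])

print("original-sample IQR:", iqr(temps))
print("mean of the bootstrapped IQRs:", boot_iqr.mean())
print("95% percentile confidence limits:", np.percentile(boot_iqr, [2.5, 97.5]))
```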

The bootstrap in education

Bootstrapping is starting to be adopted in statistics education (Lock, 2012) to transition students from exploratory data analysis (EDA), where sample statistics are introduced, to confirmatory data analysis (CDA), where formal statistical inference is first introduced.

From the author's experience in teaching undergraduate, graduate, and Six Sigma courses in statistics, students have a difficult time grasping the concepts of a sampling distribution, a standard error, and a confidence interval or hypothesis test. That is, they struggle with the basics of statistical thinking.

Unfortunately, even hand computation of standard errors and parametric confidence intervals, when such computations are feasible, does little to build conceptual understanding on the part of students. Often students become proficient in the arithmetic without developing a real conceptual understanding of the inferences to be drawn from it. This usually shows up as students correctly computing parametric confidence intervals, for example, and then stating incorrect or even nonsensical interpretations of those intervals.

Beginning with the release of JMP Pro 10, we introduced bootstrapping into the curricula for both academic and Six Sigma statistics courses. Rather than beginning CDA with the traditional discussion of parametric sampling distributions, standard errors, and confidence intervals, we introduce bootstrapping first to motivate these concepts.

Anecdotally, the results so far have been quite encouraging in terms of students demonstrating deeper conceptual understanding of sampling distributions, confidence intervals, and hypothesis testing – the students seem more adept at proper statistical thinking.

Bootstrapped hypothesis tests are not directly available in JMP Pro; however, many of the more common hypothesis tests can easily be performed from the bootstrap results that JMP Pro provides. See Hall and Wilson (1991) for more detail on bootstrap hypothesis testing.
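
As a sketch of the kind of test Hall and Wilson (1991) describe, the following Python code performs a one-sample bootstrap test of a hypothesized mean, resampling a studentized statistic that is centered at the observed sample mean. The data and the hypothesized value are illustrative:

```python
import numpy as np

rng = np.random.default_rng(seed=5)
sample = rng.normal(loc=300.0, scale=10.0, size=20)   # illustrative stand-in data
mu0 = 295.0                                           # hypothesized mean (illustrative)

n = sample.size
t_obs = (sample.mean() - mu0) / (sample.std(ddof=1) / np.sqrt(n))

B = 2000
t_boot = np.empty(B)
for b in range(B):
    rs = rng.choice(sample, size=n, replace=True)
    # Center at the observed sample mean (not mu0) and studentize, per Hall and Wilson.
    t_boot[b] = (rs.mean() - sample.mean()) / (rs.std(ddof=1) / np.sqrt(n))

# Two-sided bootstrap p-value: how often the resampled statistic is as extreme as observed.
p_value = np.mean(np.abs(t_boot) >= abs(t_obs))
print(f"t_obs = {t_obs:.3f}, bootstrap p-value = {p_value:.3f}")
```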

Bootstrapping in other statistical platforms

Besides being used as a standalone method for statistical inference, bootstrapping is increasingly incorporated into other statistical methods. A good example is partition modeling, where bootstrapping is used to grow a random ensemble (or forest) of individual decision trees. JMP Pro implements this concept in the Partition platform with the Bootstrap Forest option.
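
The idea behind a bootstrap forest can be sketched in a few lines: bootstrap the rows of the training data, fit one tree per resample, and average the trees' predictions. The Python example below shows only this bagging step (an actual random forest, including JMP's Bootstrap Forest, also samples candidate predictors at each split); the data and the scikit-learn tree settings are illustrative:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(seed=6)
X = rng.uniform(0.0, 10.0, size=(200, 3))              # illustrative predictors
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)       # illustrative response

B = 50                                                 # number of trees in the ensemble
trees = []
for b in range(B):
    idx = rng.integers(0, len(X), size=len(X))         # bootstrap the rows (with replacement)
    tree = DecisionTreeRegressor(max_depth=4, random_state=b)
    tree.fit(X[idx], y[idx])
    trees.append(tree)

# The ensemble prediction is the average of the individual trees' predictions.
X_new = rng.uniform(0.0, 10.0, size=(5, 3))
print(np.mean([t.predict(X_new) for t in trees], axis=0))
```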

Bootstrapping has also been incorporated into ANOVA, MANOVA, discriminant analysis, and regression methods, and many of the JMP Pro platforms for these analyses have bootstrapping available. Unfortunately, bootstrap hypothesis tests in some cases require a standard error for each bootstrap sample, and when no exact formula for that standard error exists, it must itself be estimated with a double bootstrap (Chernick, 2008). Double bootstrapping is not currently supported in JMP Pro.
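
A double bootstrap simply nests one bootstrap inside another: for each outer resample, an inner bootstrap estimates the standard error of the statistic computed on that resample. Here is a minimal Python sketch of the idea, using the IQR and stand-in data:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
sample = rng.normal(loc=300.0, scale=10.0, size=20)    # illustrative stand-in data

def iqr(x):
    q25, q75 = np.percentile(x, [25, 75])
    return q75 - q25

B_outer, B_inner = 200, 100
results = []
for b in range(B_outer):
    outer = rng.choice(sample, size=sample.size, replace=True)
    stat = iqr(outer)
    # Inner bootstrap: estimate the standard error of the statistic computed on THIS
    # outer resample, since no closed-form standard error exists for the IQR.
    inner = [iqr(rng.choice(outer, size=outer.size, replace=True)) for _ in range(B_inner)]
    results.append((stat, np.std(inner, ddof=1)))

print("first few (statistic, bootstrap SE) pairs:", results[:3])
```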

Overall, the use of bootstrapping and randomization tests in statistics education is growing and is only limited by the technology available to students, teachers and trainers. Its use is also consistent with the GAISE Report (ASA, 2012) recommendations for statistics education. JMP Pro provides a nice, easy-to-use platform for teachers to incorporate bootstrapping into the statistics course curricula.

Summary

In this article, we have discussed the uses of bootstrapping both as a form of statistical inference, especially where standard errors and confidence intervals cannot be easily calculated, and as an important pedagogical tool for statistics education. With the advent of ever more powerful computers, the use of bootstrapping and related methods will no doubt grow. JMP Pro provides a straightforward, easy-to-use bootstrapping capability such that the JMP user can incorporate bootstrapping into statistical analyses and into the curricula for statistics education.

References

    1. Chernick, M. (2008). Bootstrap Methods: A Guide for Practitioners and Researchers, Second Edition. New York, NY: Wiley and Sons.
    2. Cox, D. R. and Snell, E. J. (1981). Applied Statistics: Principles and Examples. London: Chapman and Hall.
    3. Efron, B. (1979). Bootstrap Methods: Another Look at the Jackknife. The Annals of Statistics, 7, 1-26.
    4. American Statistical Association (2012). Guidelines for Assessment and Instruction in Statistics Education (GAISE).
    5. Hall, P. and Wilson, S. (1991). Two Guidelines for Bootstrap Hypothesis Testing. Biometrics, 47, 757-762.
    6. Lock et al. (2012). UnLocking the Power of Data. New York, NY: Wiley and Sons.
    7. Rubin, D. B. (1981). The Bayesian Bootstrap. The Annals of Statistics, 9(1), 130-134.
    8. Tibshirani et al. (2008). Elements of Statistical Learning, Second Edition. Springer Verlag.
    9. Tukey, J. W. (1958). Bias and Confidence in Not-quite Large Samples. The Annals of Mathematical Statistics, 29(2), 614.

Philip J. Ramsey, PhD, owns the North Haven Group, a quality and statistics consulting firm offering full service training and consulting in all levels and phases of Six Sigma as well as comprehensive training in design of experiments and predictive analytics. Ramsey is also a faculty member in the Department of Mathematics and Statistics at the University of New Hampshire (UNH). He is co-author of Visual Six Sigma from SAS Press.
