
Practice JMP using these webinar videos and resources. We hold live Mastering JMP Zoom webinars with Q&A most Fridays at 2 pm US Eastern Time. See the list and register. Local-language live Zoom webinars occur in the UK, Western Europe and Asia. See your country site.

Selecting Proper Sample Size for Your Designed Experiment


See how to examine:

  • Margin of Error for One Sample Mean
  • Margin of Error for One Sample Proportion
  • Margin of Error for One Sample Variance
  • Margin of Error for Two Independent Sample Means
  • Margin of Error for Two Independent Sample Proportions
  • Margin of Error for Two Independent Sample Variances
  • Power for One Sample Mean

  • Effect of the number of trials on designed experiments, with a comparison of results


Note: Q&A is interspersed throughout the video.


Why use the Sample Size Explorers for DOE, introduced in JMP 16? To balance our resources (we don't want to run too many tests) against desired outcomes. These outcomes depend on the desired Confidence and Power, along with some test-specific estimates. The explorers answer questions such as:

  • How many units should I test?
  • Will I be able to detect a difference in my treatment means?
  • How many samples are needed to construct an interval with a specified width?
  • How many units must I test to estimate failure time?
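One of these questions, how many samples are needed for an interval of a specified width, can be sketched with a quick calculation. This is a stdlib-Python illustration using the normal approximation, not JMP's exact method (JMP's explorers use t-based calculations that give slightly larger n for small samples); the function name is ours.

```python
import math
from statistics import NormalDist

def n_for_margin_of_error(moe, sigma, confidence=0.95):
    """Smallest n so that the normal-approximation confidence interval
    for the mean has half-width no larger than `moe`, given an assumed
    population standard deviation `sigma`."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # e.g. 1.96 for 95%
    return math.ceil((z * sigma / moe) ** 2)

# With sigma = 2 and a desired margin of error of +/- 1 at 95% confidence:
# n = ceil((1.96 * 2 / 1)^2) = 16 units
```

Note the square: halving the margin of error roughly quadruples the required sample size, which is why "tight" intervals get expensive fast.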


About Statistical Confidence

If we collect a sample of parts and make a measurement on each of those parts, we can calculate a mean and a standard deviation for that specific sample. The sample was drawn from a "population" of parts, which has its own true mean and true standard deviation. We have no way of knowing the true population mean and standard deviation (unless we measure ALL PARTS in the population, which is generally cost-prohibitive). The sample mean and standard deviation are likely not going to be the same as the population mean and standard deviation. We would like to put bounds on the range of values where the true population mean might really lie.


To determine this range, we need to have some Confidence in our results. This can be any percentage between 0 and 100%. Often it is chosen to be 95%, but it depends on your application. So after some calculations, we end up with a statement like "Our sample of 15 parts had a mean of 13.1. We have 95% confidence that the true population mean from which these 15 parts were drawn is between 10.4 and 15.8." This means that 95% of the time (or 19 times out of 20) we will be right: the true population mean really would be between 10.4 and 15.8.


The remaining 5% of the time (1 time out of 20) we would be wrong.  The true population mean would either be less than 10.4 or greater than 15.8.

Note that if we draw another sample of 15 parts, we'll get another answer.  Perhaps we get a sample mean of 13.3 with a different sample standard deviation.  Now our  95% confidence interval might be 10.9 to 15.7. If we draw yet another sample, this time of 30 parts, we might produce a tighter 95% confidence interval, say 12.1 to 14.5.  It is tighter because we have a better estimate of the true population mean and standard deviation, by virtue of the larger sample size.
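The arithmetic behind such an interval can be sketched as follows. This is a stdlib-only Python illustration using the normal quantile; for samples as small as 15, JMP uses the t distribution, which gives a slightly wider interval. The function name and the sample data are ours.

```python
import math
import statistics

def mean_ci(sample, confidence=0.95):
    """Normal-approximation confidence interval for the population mean.
    (A t-based interval is more appropriate for small n; this sketch
    uses the normal quantile to stay standard-library only.)"""
    n = len(sample)
    m = statistics.mean(sample)
    s = statistics.stdev(sample)  # sample standard deviation
    z = statistics.NormalDist().inv_cdf(0.5 + confidence / 2)
    half = z * s / math.sqrt(n)   # margin of error shrinks as sqrt(n) grows
    return m - half, m + half

# Illustrative sample of measurements (made-up data):
parts = [12, 14, 11, 15, 13, 12, 16, 13]
lo, hi = mean_ci(parts)
```

Because the half-width scales as 1/sqrt(n), quadrupling the sample size roughly halves the interval width, which matches the behavior described above: larger samples give tighter intervals.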


About Statistical Power

This generally involves being able to detect a DIFFERENCE. For example, we want to know if the true population mean is 15, within +/- 1 unit. We intend to draw a sample of 5 parts from the population. We expect the standard deviation of the parts will be 1.0 units. When we estimate the actual mean of the population, we want to have 95% confidence in the estimate.


How often will we be right in our assumption that the true population mean is 15 +/- 1? This is STATISTICAL POWER. In this case we would be right 40% of the time. So, if we draw multiple samples of 5 parts each, and we draw a conclusion each time about whether the population mean is truly 15 +/- 1, we will only be right 40% of the time. We can increase power by increasing our number of samples, by widening the difference to detect, or by reducing our confidence in the estimate of the population mean. Power = probability of correctly rejecting the null hypothesis. We are typically only interested in power when the null is actually false.
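A rough sketch of the power calculation for a one-sample mean is below. This is a normal-approximation illustration in stdlib Python, not JMP's method: JMP's explorers use t-based calculations that account for estimating the standard deviation, and for very small samples the normal approximation overstates power, which is one reason a careful t-based calculation can report a figure as low as the 40% quoted above. The function name is ours.

```python
import math
from statistics import NormalDist

def power_one_sample_mean(n, delta, sigma, alpha=0.05):
    """Approximate power of a two-sided one-sample z-test to detect a
    true shift of `delta` from the hypothesized mean, with known
    standard deviation `sigma` (normal approximation)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)        # e.g. 1.96 at alpha = 0.05
    ncp = delta / (sigma / math.sqrt(n))      # shift in standard-error units
    # Probability the test statistic lands beyond either critical value:
    return nd.cdf(ncp - z_crit) + nd.cdf(-ncp - z_crit)
```

The formula makes the trade-offs explicit: power rises with n (through sqrt(n)), with a larger difference to detect (delta), and with a smaller alpha requirement relaxed, exactly the three levers described above.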





There was a question during the session on the nature of the anticipated coefficients; whether they were on the scaled factors or the actual levels. I can confirm from a fellow developer that they are in reference to the scaling Fit Model uses in its models (e.g. Save the Prediction Formula and note the factor centering and scaling used there). 


Thanks for the clarification @calking!


Also, as I reviewed my journal, I found some errors in the 'Type 1 and Type 2 Errors' chart that was shown in the video.  I have corrected the errors in the attached journal, and in the image below:


Type 1 and Type 2 Errors.png

#JMP16, #Type_1_Type_2_Errors

