MichaelM92
Level I

How to determine term significance based on power value?

Using the design evaluation tool in JMP, I compared a Box-Behnken design with a Custom design (with 28 runs) to evaluate 5 factors with 3 levels each (see below):

[Attached image: MichaelM92_0-1676040450554.png (design comparison table)]

Based on that comparison, the custom design was chosen for the experimental runs. After running the experiment and fitting the model, I found the quadratic terms to be statistically significant (p < 0.05) for one or more responses, even though the power for these terms is low in the design evaluation table. In some cases, based on my knowledge of the process, I do not consider them practically significant. Would the recommendation be to remove these terms from the model? What other metrics should be taken into account?
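Low power and a significant p-value are not contradictory: power only says how often a design would detect an effect of an assumed size, while any single experiment can still cross p < 0.05. A minimal numpy-only Monte Carlo sketch illustrates this with an invented single-factor quadratic model (effect size, noise SD, and run count are all assumptions, not the poster's design):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design: one coded factor at 3 levels, 6 replicates each (18 runs)
x = np.tile([-1.0, 0.0, 1.0], 6)
X = np.column_stack([np.ones_like(x), x, x**2])  # intercept, linear, quadratic
beta = np.array([10.0, 2.0, 0.8])                # small quadratic effect (assumed)
sigma = 1.5                                      # noise SD (assumed)
df = len(x) - X.shape[1]                         # residual degrees of freedom = 15
t_crit = 2.131                                   # two-sided 5% critical t, df = 15
XtX_inv = np.linalg.inv(X.T @ X)                 # fixed by the design, compute once

n_sim, hits = 5000, 0
for _ in range(n_sim):
    y = X @ beta + rng.normal(0.0, sigma, size=len(x))
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    mse = resid @ resid / df                     # estimate of error variance
    se_quad = np.sqrt(mse * XtX_inv[2, 2])       # std. error of quadratic coef.
    if abs(b[2] / se_quad) > t_crit:
        hits += 1                                # this simulated run found p < 0.05

power = hits / n_sim
print(f"Simulated power for the quadratic term: {power:.2f}")
```

The simulated power comes out well below 0.5, yet a substantial fraction of individual runs still declare the quadratic term significant, which is exactly the situation described above.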


Re: How to determine term significance based on power value?

The significance of these terms means that they contribute to the accuracy of your model and its predictions. Removing them does not make their effects disappear: their contribution is absorbed into the random error, which inflates the confidence intervals of your predictions. I do not see a reason to remove these terms. What benefit do you expect from removing them?

 

What is the basis for your statement that you "do not find these to be practically significant" given your knowledge of the process? What do you mean by that?

statman
Super User

Re: How to determine term significance based on power value?

First, welcome to the community. Realize that design evaluation tools are based on certain assumptions (many of which are unknown before the experiment is run). There are a number of factors that affect design selection, and all of them depend on your predictions. See:

https://www.jmp.com/support/help/en/17.0/?os=mac&source=application#page/jmp/overview-of-the-evaluat...

 

For your situation, you chose a design and ran it, correct? How did you go about doing the analysis? If I interpret your analysis correctly, it sounds like you did not create any practically significant variation in the experiment. Why not? There are three possible reasons for this:

1. Levels were not bold enough

2. Measurement systems were not adequate

3. The factors don't matter

(BTW, this is different from the case where you created practically significant variation but it is not assignable to the factors you manipulated).

What degrees of freedom (DF) were used to estimate the random error (MSE)? Was there any replication? When you say the quadratic terms are statistically significant, based on what comparison? Is it possible to share your data table?
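The replication question matters because replicated runs give a model-independent "pure error" estimate with its own degrees of freedom. As a hedged sketch (invented two-factor run log with replicated center points, not the poster's data), the pure-error sum of squares and DF can be computed directly from the within-group variation at repeated settings:

```python
import numpy as np

# Hypothetical run log: coded factor settings and responses (invented data);
# the repeated (0, 0) rows are replicated center points.
runs = [
    ((-1, -1), 8.1), ((1, -1), 9.4), ((-1, 1), 7.7), ((1, 1), 10.2),
    ((0, 0), 8.9), ((0, 0), 9.1), ((0, 0), 8.8), ((0, 0), 9.0),
]

# Group responses by identical factor settings
groups = {}
for setting, y in runs:
    groups.setdefault(setting, []).append(y)

# Pure-error SS: within-group variation at replicated settings only
ss_pe, df_pe = 0.0, 0
for ys in groups.values():
    if len(ys) > 1:
        ys = np.array(ys)
        ss_pe += ((ys - ys.mean()) ** 2).sum()
        df_pe += len(ys) - 1          # each replicated group contributes n - 1 DF

print(f"Pure-error SS = {ss_pe:.4f}, df = {df_pe}")
```

With no replication at all, df_pe is zero and the MSE rests entirely on the assumption that the omitted model terms are truly negligible, which is why statman asks where the error DF came from.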

 

In general, if you remove significant terms from the model, their variance is pooled into the error term. This inflates the error term and makes other factors appear less significant.
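This pooling effect is easy to demonstrate. A minimal numpy-only sketch (simulated data with an invented, genuinely nonzero quadratic effect, not the poster's experiment) fits the model with and without the quadratic term and compares the residual mean squares:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated experiment: one coded factor at 3 levels, 8 replicates (24 runs),
# with a real quadratic effect of 1.5 and noise SD of 1.0 (all assumed values)
x = np.tile([-1.0, 0.0, 1.0], 8)
y = 10.0 + 2.0 * x + 1.5 * x**2 + rng.normal(0.0, 1.0, size=len(x))

def fit_mse(X, y):
    """Least-squares fit; return the residual mean square RSS / (n - p)."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return resid @ resid / (len(y) - X.shape[1])

X_full = np.column_stack([np.ones_like(x), x, x**2])  # keeps the quadratic term
X_red = X_full[:, :2]                                 # quadratic term removed

mse_full, mse_red = fit_mse(X_full, y), fit_mse(X_red, y)
print(f"MSE with quadratic term: {mse_full:.2f}; without: {mse_red:.2f}")
```

The reduced model's MSE is visibly larger, because the variation the quadratic term explained has been pooled into the error term; every other t-test in the model then uses that inflated error estimate.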

 

Not sure what you mean by "What other metrics should be taken into account?".  What is the goal of the experiment?  Are you trying to understand causal structure or "pick a winner"?  Are you building or refining a model?  There are a number of statistics that can be useful to help build your model.

"All models are wrong, some are useful" G.E.P. Box