SaraA
Level III

Analysis Design of Experiments

Hi JMP Community,

 

I have two questions regarding the analysis of the Design of Experiments I performed:

 

1) When performing a screening DOE (Plackett-Burman design), if one of the model terms has a positive effect on the dependent variable but does not appear to reach statistical significance, does it make sense to still include this factor in the second, optimization DOE? I would think that when a factor does not have a statistically significant effect on the outcome in a screening DOE, it does not necessarily mean that the effect of this factor on the outcome is negligible. Is this correct?

 

2) When analyzing a DOE and reducing the model by removing non-significant terms, I observe that the lack-of-fit test, which was significant before the reduction, becomes non-significant afterwards. How should I interpret this? Does it mean that by removing these terms, the pure error of the model increases, which subsequently decreases the lack of fit?
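Regarding question 1), here is a minimal simulated sketch (hypothetical factors, effect sizes, and noise level, not the original experiment) of how a factor with a real, positive effect can still miss the 0.05 cut-off in a 12-run Plackett-Burman screen when its effect is modest relative to the run-to-run noise.

```python
# Hypothetical illustration: a 12-run Plackett-Burman screen in which factor x3
# has a real positive effect that is small relative to the noise, so its
# p-value can land above 0.05 even though the effect is not negligible.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Classical 12-run Plackett-Burman generator row (11 columns, +/-1 coding);
# rows 1-11 are cyclic shifts of the generator, row 12 is all -1.
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
pb = np.array([np.roll(gen, i) for i in range(11)] + [-np.ones(11, dtype=int)])

X = pb[:, :4]                               # study 4 factors, x1..x4
true_beta = np.array([3.0, 0.0, 1.0, 0.0])  # x3 has a modest but real effect
y = 10 + X @ true_beta + rng.normal(scale=1.5, size=12)

fit = sm.OLS(y, sm.add_constant(X)).fit()
print(fit.params)    # x3's estimate is clearly positive...
print(fit.pvalues)   # ...yet its p-value can sit near or above 0.05
```

For question 2), a sketch of how the lack-of-fit F-test is assembled may help. Pure error comes only from replicate runs, so it does not change when terms are dropped; instead, the sums of squares and degrees of freedom of the dropped terms are pooled into the lack-of-fit component, which is what moves the F-ratio. Again, the data and the models below are invented for illustration.

```python
# Hypothetical illustration: lack-of-fit decomposition for a full and a reduced
# model fitted to the same 8-run design with replicated centre points.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(7)

# Two coded factors; the four centre-point replicates supply the pure error
x1 = np.array([-1, 1, -1, 1, 0, 0, 0, 0])
x2 = np.array([-1, -1, 1, 1, 0, 0, 0, 0])
y = 5 + 2 * x1 + 0.3 * x2 + rng.normal(scale=0.5, size=8)
dat = pd.DataFrame({"x1": x1, "x2": x2, "y": y})

def lack_of_fit(model, data):
    """Split the residual SS into pure error (within replicate groups) and lack of fit."""
    groups = data.groupby(["x1", "x2"])["y"]
    ss_pe = groups.apply(lambda g: ((g - g.mean()) ** 2).sum()).sum()
    df_pe = (groups.size() - 1).sum()
    ss_lof = model.ssr - ss_pe          # residual variation beyond pure error
    df_lof = model.df_resid - df_pe
    F = (ss_lof / df_lof) / (ss_pe / df_pe)
    return F, stats.f.sf(F, df_lof, df_pe)

full = smf.ols("y ~ x1 + x2 + x1:x2", data=dat).fit()
reduced = smf.ols("y ~ x1", data=dat).fit()
print("full model    LOF F, p:", lack_of_fit(full, dat))     # pure error is identical in both fits
print("reduced model LOF F, p:", lack_of_fit(reduced, dat))  # dropped terms' df now sit in lack of fit
```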

 

Thank you,

Sara

P_Bartell
Level VIII

Re: Analysis Design of Experiments

With regard to the questions contained in 1.), here are my thoughts. Don't get hung up on statistical significance. First and foremost you are trying to solve some practical problem. If your domain knowledge tells you that a 'non-significant' factor is important...then by all means include it in subsequent experimentation. The reasons behind 'non-significance' can be many and varied: excessive measurement system noise, or factor levels that were too narrow for the signal to rise above the noise. And don't fall into the p-value cliff mentality, the mononumerosis associated with the talismanic 0.05 value...are you really going to throw out the factor if the p-value is 0.051 and keep it if it's 0.049?

 

As for question 2.), more thoughts...lack-of-fit tests are nice, but what do the residual plots tell you? Swapping degrees of freedom in and out of a lack-of-fit test is like trying to hunt for the 'winner'. Not something I'd spend a lot of time on. Back to the practical problem: focus on how whatever model you end up with helps you solve the practical problem, not on an F-test for LOF.
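To make the residual-plot suggestion concrete, here is a minimal, self-contained sketch (simulated data, not from this experiment) of the two plots most people reach for first: residuals versus fitted values and a normal quantile plot.

```python
# Simulated illustration of basic residual diagnostics for a fitted OLS model
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(20, 2))
y = 5 + 2 * X[:, 0] + rng.normal(scale=0.5, size=20)
model = sm.OLS(y, sm.add_constant(X)).fit()

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# Residuals vs. fitted: look for curvature, trends, or funnel shapes
ax1.scatter(model.fittedvalues, model.resid)
ax1.axhline(0, linestyle="--")
ax1.set_xlabel("Fitted values")
ax1.set_ylabel("Residuals")

# Normal quantile plot: roughly straight if the residuals are near-normal
sm.qqplot(model.resid, line="45", fit=True, ax=ax2)

plt.tight_layout()
plt.show()
```

In JMP itself the equivalent diagnostics are available directly in the Fit Model report, so no scripting is needed; the sketch is only to show what to look for.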

 

My five cents...others may feel differently.

SaraA
Level III

Re: Analysis Design of Experiments

Thank you @P_Bartell, this was my thought process as well for both of the questions I asked.

Mark_Bailey

Re: Analysis Design of Experiments

Regarding your first question, was the factor range wide enough to elicit a strong effect (i.e., a change in the response)? Limiting the range around a value expected to be a good level can decrease the power of the tests for a non-zero estimate.
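A quick simulation (invented slope, noise level, and run count, purely for illustration) of the range point: with the same true effect per unit of the factor and the same noise, shrinking the tested range shrinks the signal and therefore the power to detect a non-zero estimate.

```python
# Simulated illustration: power to detect a fixed per-unit slope when the
# factor is varied over a wide vs. a narrow range in an 8-run experiment.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
slope, sigma, n_sim = 2.0, 1.5, 2000

def power(half_range):
    """Fraction of simulated experiments in which the slope reaches p < 0.05."""
    x = np.tile([-half_range, half_range], 4)   # 8 runs split between the two extremes
    X = sm.add_constant(x)
    hits = 0
    for _ in range(n_sim):
        y = 10 + slope * x + rng.normal(scale=sigma, size=8)
        hits += sm.OLS(y, X).fit().pvalues[1] < 0.05
    return hits / n_sim

print("power, wide range   (+/- 1.0):", power(1.0))
print("power, narrow range (+/- 0.3):", power(0.3))
```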

SaraA
Level III

Re: Analysis Design of Experiments

@Mark_Bailey I used the widest range possible that was not toxic to the cells I was testing the compound/factor on. Otherwise, the experiment would have given me limited information, since all the cells would be dead.