ZenCar
Level I

DOE: using a subset full factorial to confirm results of a definitive screening design

Hi,

 

Background: 
- I ran an 8-factor DSD and identified 5 factors and 5 interactions as significant.
- I selected two of the significant factors (each with three levels) and ran a 9-run full factorial DOE to validate the DSD.
 
Result:
- Neither of the two factors nor their interaction is significant: disagrees with the DSD.
- The two factors do not interact: agrees with the DSD.
- Quadratic effects are not significant: agrees with the DSD.
 
Please comment or advise.
 
Thanks.
 
 
4 REPLIES
P_Bartell
Level VIII

Re: DOE: using a subset full factorial to confirm results of a definitive screening design

A few things to check/ask. There can be any number of explanations or things to look at:

 

1. How did the levels of the full factorial compare to the levels of the DSD? Identical, wider, narrower?

2. You don't share the factors...so it's hard to know...but did you have raw-material variation between the DSD and the full factorial? For example, a different batch in each of the two experiments might explain what you are seeing.

3. Did the same staff run the experiments? There might be operator variation.

4. How about the measurement system(s) for the responses? Did they change or do you even know the variation of these systems?

5. You use that dreaded word 'significance'. Some folks take a very strict view of 'significance' and use p-values as a cliff to establish it. Hopefully you aren't one of those folks...for example, these folks would say (presuming 0.05 is your 'cliff') that an effect whose p-value is 0.049 is significant, whilst another effect whose p-value is 0.051 is not. I'd argue both effects are worthy of further investigation. How do the p-values compare for consistent effects across the experiments?

6. How do the residuals compare when you apply the model from the DSD to the actual results you observed in the full factorial? Are they practically useful/consistent...forgetting about p-values and such.

7. How do the responses vs. factor levels and experimental execution order compare across the experiments? Are they at least directionally consistent?

8. Lastly, can you share the results/data?
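Point 6 above can be sketched numerically: fit or take the reduced model from the screening design, predict the nine confirmation runs, and look at the residuals directly rather than at p-values. A minimal Python sketch, where the coefficients and observed responses are invented for illustration (they are not the poster's data):

```python
import numpy as np

# Hypothetical reduced model from the DSD: y ~ b0 + b1*x1 + b2*x2
# in coded -1/0/+1 units. Coefficient values are made up.
dsd_coef = np.array([10.0, 2.5, -1.8])  # [intercept, x1, x2]

# The 9 runs of the 3x3 confirmation full factorial (coded levels),
# as a design matrix with an intercept column.
levels = [-1, 0, 1]
X = np.array([[1, a, b] for a in levels for b in levels], dtype=float)

# Hypothetical observed responses from the confirmation runs.
y_obs = np.array([6.1, 8.0, 9.7, 7.9, 10.2, 12.4, 9.6, 12.1, 14.8])

# Predict the confirmation runs with the DSD model and compare.
y_pred = X @ dsd_coef
resid = y_obs - y_pred

print("RMSE of DSD model on confirmation runs:", np.sqrt(np.mean(resid**2)))
print("Max |residual|:", np.abs(resid).max())
```

If the RMSE is small relative to the practically important effect size (and to the measurement-system variation from point 4), the two experiments may agree even when the p-values look different.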

Re: DOE: using a subset full factorial to confirm results of a definitive screening design

What did you do with the other six factors that were in the DSD when you ran the two factors to confirm?

Re: DOE: using a subset full factorial to confirm results of a definitive screening design

To add to @Mark_Bailey's comment: if the two chosen factors are involved in interactions with factors that are NOT part of the confirmation full factorial, it is very possible that the settings of those factors "negate" the main effects.
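This masking mechanism can be shown with a toy model (the coefficients are invented purely for illustration): suppose the true response is y = 3*x1 + 3*x1*x3, where x3 is one of the six factors left out of the confirmation design. If x3 happened to be held at -1 during the confirmation runs, the apparent effect of x1 is 3 + 3*(-1) = 0.

```python
import numpy as np

# Toy response: y = 3*x1 + 3*x1*x3 (invented coefficients for illustration).
def y(x1, x3):
    return 3.0 * x1 + 3.0 * x1 * x3

# In the DSD, x3 also varied; averaged over x3 = -1 and +1,
# the x1 main effect (y at x1=+1 minus y at x1=-1) is 6.
effect_dsd = np.mean([y(1, s) - y(-1, s) for s in (-1.0, 1.0)])

# In the confirmation runs, x3 was held constant at -1:
# the interaction exactly cancels the main effect.
effect_confirm = y(1, -1) - y(-1, -1)

print(effect_dsd)      # 6.0
print(effect_confirm)  # 0.0
```

Checking where the other six factors were set during the confirmation runs, against the interactions flagged by the DSD, would tell you whether this is what happened.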

 

Why choose only two of the five active factors? Why run a full DOE on those two factors? Often, to verify a model, only a few select points of interest are explored. Nothing wrong with doing it this way; I just find it curious and want to understand.

Dan Obermiller
statman
Super User

Re: DOE: using a subset full factorial to confirm results of a definitive screening design

@P_Bartell has given you a nice list of considerations.  As he says, there are many possible reasons.  First and foremost is the handling of noise (factors that varied but were not explicitly manipulated, or that were held constant during the experiment).  If you held noise constant during your DSD and any of it changed during your factorial, the inference space of your DSD is the issue.  I've lost count of the situations where a design (factorial, optimal, DSD, etc.), although quite efficient and effective in its handling of the design factors, does very little to create a design space representative of future conditions. Any experiment without a noise strategy risks a similar fate.

"All models are wrong, some are useful" G.E.P. Box