CBroomell1
Level II

Regarding Repeat Experiments in DoE

Hello All,

I have what I hope will be a simple question for the group.

 

I recently set up a DoE with three discrete numeric factors, each at 3 levels.  I specified two-factor interactions and 2nd-order (quadratic) effects, with the 18 runs to be executed in 3 blocks.  I did NOT specify any repeats in the design; however, I notice in the design matrix that 4 of the runs are duplicated.  In other words, the 18-run design comprises 14 unique combinations and 4 duplicates.

 

First: is this atypical?  I had thought that all 18 runs should be unique.  Is there some kind of statistical power that JMP is trying to achieve with this?

-or-

Second: could there be something I'm missing in my initial setup, such that we're actually NOT probing the design space correctly?

 

Unfortunately, the first two blocks have already been initiated and include ONE of the repeats (i.e., so far I have 11 unique runs plus 1 repeat).

If I assume that the design is appropriate...it follows that we can continue according to the original plan and evaluation (correct?).

However, if the feedback is that this ISN'T the best approach: 1) what would we be missing if we continued with the original design?  2) Is it possible to reconfigure the last random block to substitute alternative conditions for the repeats?

 

Thanks in advance for your input on this!

 

Chris 

11 REPLIES
CBroomell1
Level II

Re: Regarding Repeat Experiments in DoE

Hi Mark and Victor,

 

Thanks to both of you for your input; it was definitely helpful and I appreciate it.  I may reach out once we have the final block finished if I have questions regarding the assessment.

 

 

Victor_G
Super User

Re: Regarding Repeat Experiments in DoE

Hi @CBroomell1,

 

And welcome to the community!

 

We may lack information to guide you more precisely with your DoE approach, but the question regarding replicate runs can be answered. Regarding your design, there is nothing to worry about from what you're describing; this is common:

 

  • With 3 discrete numeric factors at 3 levels, depending on the model you assume, here is the maximum number of terms you may want to estimate: 1 intercept, 3 main effects (X1, X2, X3), 3 two-factor interactions (X1X2, X1X3, X2X3), and 3 quadratic effects (X1*X1, X2*X2, X3*X3) if you want to estimate curvature for these factors (a fixed blocking factor adds a block effect as well, which with 3 blocks uses 2 additional parameters; a random block adds no fixed parameters). So you need at least 10 runs (or 12 with a fixed 3-level blocking factor) just to estimate these effects. The problem with using the minimal number of runs is that you have no degrees of freedom left to estimate pure error and perform a lack-of-fit test, which assesses whether the model fits the data well. So in such a situation JMP will by default recommend more runs, to be able to test the model, have more confidence in the estimates, and balance the runs between the blocks, and it will create replicate runs. Adding replicate runs adds precision to some estimates and improves the power of the lack-of-fit test (the first sketch after this list makes the bookkeeping concrete).
  • These replicate runs need to be performed and measured independently (meaning you carry out the experiments like the others, under the same conditions) following the randomization done by JMP, not by measuring one experiment several times to fill in the response for the replicate runs. Repeats (i.e., repetitions of the measurement only) help decrease the variance of the measurements, but may not help with parameter estimation or the lack-of-fit test (the second sketch below illustrates the difference).
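
To make the run-count arithmetic concrete, here is a minimal Python sketch (not JMP output; the factor names X1, X2, X3 are placeholders, and the degrees-of-freedom split is a simplified illustration assuming a full quadratic model, a fixed 3-level block, and the 18-run design with 4 duplicated settings described above):

```python
from itertools import combinations

factors = ["X1", "X2", "X3"]  # placeholder names for the three discrete numeric factors

# Full quadratic model: intercept, main effects,
# two-factor interactions, and quadratic terms.
terms = ["Intercept"]
terms += factors                                             # 3 main effects
terms += [f"{a}*{b}" for a, b in combinations(factors, 2)]   # 3 two-factor interactions
terms += [f"{x}*{x}" for x in factors]                       # 3 quadratic effects

n_runs = 18
block_params = 3 - 1          # a fixed 3-level blocking factor uses 2 parameters
n_params = len(terms) + block_params

residual_df = n_runs - n_params
pure_error_df = 4             # 4 duplicated settings -> 4 df for pure error
                              # (simplified; JMP's exact accounting also considers blocks)
lack_of_fit_df = residual_df - pure_error_df

print(f"model terms ({len(terms)}): {terms}")
print(f"parameters incl. fixed blocks: {n_params}")
print(f"residual df: {residual_df} = pure error {pure_error_df} + lack of fit {lack_of_fit_df}")
```

With only the minimal 12 runs there would be no residual degrees of freedom at all; the 4 extra replicate runs are what make the pure-error estimate and the lack-of-fit test possible.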
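
And to illustrate why repeats are not a substitute for replicates, here is a hypothetical simulation (the variance components are invented for illustration): re-measuring one run many times only reveals the measurement noise, while independent replicate runs reveal the full run-to-run variation that the pure-error estimate is supposed to capture.

```python
import numpy as np

rng = np.random.default_rng(42)

run_to_run_sd = 2.0    # assumed experiment-to-experiment variation
measurement_sd = 0.5   # assumed repeatability of the measurement itself
true_mean = 100.0
n = 1000

# Repeats: ONE experimental run, measured n times.
# Only measurement noise shows up; run-to-run variation stays hidden.
one_run = true_mean + rng.normal(0.0, run_to_run_sd)
repeats = one_run + rng.normal(0.0, measurement_sd, size=n)

# Replicates: n independent runs, each measured once.
# Both variance components show up.
replicates = (true_mean
              + rng.normal(0.0, run_to_run_sd, size=n)
              + rng.normal(0.0, measurement_sd, size=n))

print(f"repeats SD:    {repeats.std(ddof=1):.2f}  (about the measurement SD only)")
print(f"replicates SD: {replicates.std(ddof=1):.2f}  "
      f"(about sqrt(2.0^2 + 0.5^2) = {np.hypot(run_to_run_sd, measurement_sd):.2f})")
```

Treating repeats as if they were replicates would therefore understate the experimental error and make the lack-of-fit test misleadingly optimistic.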

 

To gain more confidence and knowledge about DoE, I can highly recommend the Design of Experiment Intro Kit, as well as the DoE module from the STIPS courses.

If you want to describe your situation (without sharing confidential data) and get more detailed feedback on your DoE, don't hesitate to share an anonymized data table or dummy data to illustrate your case.

Hope this will help you,

 

 

Victor GUILLER
L'Oréal Data & Analytics

"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)