lazzybug
Level III

Can I remove one experiment from DSD?

I want to use a DSD to investigate 7 parameters with 24 experiments. However, the design options give me either 21 or 25 runs. I have 24 experimental devices, so I want to use all of them. Can I design 25 runs but only run 24? I noticed that if I delete one of the runs, I cannot use Fit Definitive Screening to analyze the results. Can I use a stepwise method to fit the model instead?

 

There is another case: how do we fit the model if one run fails? Rerunning a failed experiment is very time consuming, which is a big hurdle for our team in adopting DSDs.

 

Thank you so much for your help.

4 REPLIES
louv
Staff (Retired)

Re: Can I remove one experiment from DSD?

In my opinion, this would just be like having one of the 25 runs be a failed run. You may not be able to use Fit Definitive Screening, but you can certainly model your design using other methods.
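A minimal Python sketch of one such alternative: a forward-selection fit over the usual DSD model terms (main effects, two-factor interactions, quadratics). The design and response below are placeholders, and unlike JMP's DSD-aware analysis this sketch makes no attempt to respect effect heredity:

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Placeholder data: a 24-run, 7-factor coded design and a response.
# In practice X and y would come from the exported design table.
rng = np.random.default_rng(1)
X = rng.choice([-1.0, 0.0, 1.0], size=(24, 7))
y = 3 * X[:, 0] - 2 * X[:, 1] + 1.5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.5, size=24)

# Expand to main effects, two-factor interactions, and quadratic terms
poly = PolynomialFeatures(degree=2, include_bias=False)
X_full = poly.fit_transform(X)
terms = poly.get_feature_names_out([f"x{i + 1}" for i in range(7)])

# Forward selection, using cross-validation to judge how well terms predict
selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=5, direction="forward", cv=5
)
selector.fit(X_full, y)
selected = terms[selector.get_support()]

# Refit ordinary least squares on the selected terms only
final = LinearRegression().fit(X_full[:, selector.get_support()], y)
print(dict(zip(selected, final.coef_.round(3))))
```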

Hopefully you have material to verify the model.

Victor_G
Super User

Re: Can I remove one experiment from DSD?

Hi @lazzybug,

The answer from @louv is very good.
Just to add a few more questions about your case and some possible solutions:

  • Did you try generating several types of designs and comparing them against your target? Optimal designs (such as Alias-Optimal designs, with interaction and quadratic terms set to "If Possible" estimability) could be a good choice, offering more flexibility in the modeling and in the number of experiments to run.
  • Another option would be to generate the 21-run DSD and add 3 randomized replicate runs (and/or centre points?) to reach the 24 runs you want.
  • Removing one experiment from your 25-run DSD will certainly cost you some optimality, statistical power, and precision in the parameter estimates, but as you and @louv stated, other modeling techniques can still help you analyze your DoE results.

For these three options, I would highly recommend checking and comparing the designs with the "Compare Designs" platform in the "DoE -> Design Diagnostics" menu. This will help you make the best choice given your constraints and experimental budget.
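For a rough look at the same question outside of JMP, one diagnostic that Compare Designs reports is the correlation between model-matrix columns (the colour map of correlations). A minimal numpy sketch of that idea, assuming coded (-1/0/+1) design tables; the random designs below are placeholders, not the actual candidates:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

def max_model_term_correlation(design):
    """Largest absolute pairwise correlation among main-effect, two-factor
    interaction, and quadratic columns of a coded (-1/0/+1) design."""
    X = PolynomialFeatures(degree=2, include_bias=False).fit_transform(design)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    return corr[~np.eye(len(corr), dtype=bool)].max()

# Placeholder candidate designs; real ones would be exported from the DoE platform
rng = np.random.default_rng(0)
design_21 = rng.choice([-1.0, 0.0, 1.0], size=(21, 7))
design_24 = rng.choice([-1.0, 0.0, 1.0], size=(24, 7))
for label, d in [("21-run candidate", design_21), ("24-run candidate", design_24)]:
    print(label, "max |correlation| between model terms:",
          round(max_model_term_correlation(d), 3))
```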

 

One last idea to fully exploit DoE results (even with a missing run) would be to use SVEM to get as much precision as possible in the parameter estimates. You can watch the presentation by Simon Stelzig, which covers several incomplete designs that still yield relevant and interesting models with SVEM: SVEM (Self-Validated Ensemble Modeling): A Path Toward DOEs with Complex Models ... - JMP User Commu...
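For readers curious about the mechanics, the core idea behind SVEM is a fractionally weighted bootstrap: each ensemble member is trained with random exponential weights and validated against the anti-correlated complementary weights, so every run is used for both fitting and validation. A simplified Python sketch of that idea (an illustration, not JMP's implementation), assuming a model matrix `X` and response `y`:

```python
import numpy as np
from sklearn.linear_model import Lasso

def svem_style_predict(X, y, X_new, n_members=100,
                       alphas=(0.01, 0.03, 0.1, 0.3, 1.0), seed=0):
    """Fractionally weighted bootstrap ensemble (SVEM-style sketch)."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    n = len(y)
    preds = []
    for _ in range(n_members):
        u = rng.uniform(size=n)
        w_train = -np.log(u)        # exponential(1) training weights
        w_valid = -np.log(1.0 - u)  # anti-correlated validation weights
        # pick the penalty that minimizes the validation-weighted error
        best_alpha, best_err = None, np.inf
        for a in alphas:
            m = Lasso(alpha=a, max_iter=10000).fit(X, y, sample_weight=w_train)
            err = np.sum(w_valid * (y - m.predict(X)) ** 2)
            if err < best_err:
                best_alpha, best_err = a, err
        member = Lasso(alpha=best_alpha, max_iter=10000).fit(X, y, sample_weight=w_train)
        preds.append(member.predict(X_new))
    return np.mean(preds, axis=0)   # ensemble-averaged prediction
```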

Hope this will help you,

Victor GUILLER
L'Oréal Data & Analytics

"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)
lazzybug
Level III

Re: Can I remove one experiment from DSD?

Hi @Victor_G & @louv, thank you so much for your quick replies.

 

As you stated, I can choose the 21-run DSD plus 3 replicates to fill our 24 devices. I also compared other methods but found the DSD is the best solution. A custom design needs at least two blocks to investigate main effects, some interactions, and curvature, which is no better than the traditional approach of screening first and then optimizing. The benefit of the DSD is that if only three factors are significant, I can fit the data with main effects, interactions, and curvature. My only concern is that the power for the curvature term is not very high (0.264); should I augment the experiment, or is it enough?

 

Thank you so much for your answers again.

Victor_G
Super User

Re: Can I remove one experiment from DSD?

Hi @lazzybug,

For details about power, you can re-read the responses @Phil_Kay and I gave you on a previous topic: Solved: Should I consider power analysis in DOE? - JMP User Community

In your specific case, it's normal to have lower power for quadratic effects than for main effects in a DSD, but it's hard to say whether 0.264 is "high enough".
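If it helps to put 0.264 in context, one way to judge "high enough" is to pick the smallest quadratic effect you would hate to miss and an anticipated noise level, then estimate power by simulation from the design you plan to run. A hedged Python sketch of that idea (JMP's power report does an analogous calculation analytically; you would supply the model matrix of the model you actually intend to fit):

```python
import numpy as np
from scipy import stats

def simulated_power(X_model, term_index, beta, sigma,
                    n_sim=2000, alpha=0.05, seed=0):
    """Monte Carlo power for one coefficient in an ordinary least-squares fit.
    X_model: n x p model matrix (intercept included), beta: assumed true
    coefficients, sigma: assumed noise standard deviation."""
    rng = np.random.default_rng(seed)
    n, p = X_model.shape
    dof = n - p
    t_crit = stats.t.ppf(1 - alpha / 2, dof)
    XtX_inv = np.linalg.inv(X_model.T @ X_model)
    hits = 0
    for _ in range(n_sim):
        y = X_model @ beta + rng.normal(0.0, sigma, size=n)
        b = XtX_inv @ X_model.T @ y              # OLS estimates
        resid = y - X_model @ b
        se = np.sqrt(resid @ resid / dof * XtX_inv[term_index, term_index])
        hits += abs(b[term_index] / se) > t_crit  # significant this time?
    return hits / n_sim
```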


My practical advice would be:

  1. Run your original design and do the modeling.
  2. Prepare and run some validation points (with settings not fully tested in the DoE, in an area of interest for example). These validation points can be part of the augmented design (i.e., some of the new points JMP recommends when you augment your original design), so that no experimental budget or time is lost.
  3. Compare actual vs. predicted values on your validation points (see the small sketch after this list). Does the model seem adequate for your system? Is it adequate for your target/purpose?
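For step 3, the comparison can be as simple as listing residuals and an RMSE for the held-out validation points and checking them against the precision you need. A minimal sketch with placeholder numbers (the predictions would come from the model fitted in step 1):

```python
import numpy as np

def validation_report(y_actual, y_pred):
    """Actual vs. predicted summary for a handful of validation runs."""
    y_actual = np.asarray(y_actual, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_actual - y_pred
    print("run  actual  predicted  residual")
    for i, (a, p, r) in enumerate(zip(y_actual, y_pred, resid), start=1):
        print(f"{i:>3}  {a:7.3f}  {p:9.3f}  {r:9.3f}")
    rmse = np.sqrt(np.mean(resid ** 2))
    print(f"RMSE on validation points: {rmse:.3f}")
    return rmse

# Placeholder example
validation_report([10.2, 11.8, 9.5], [10.6, 11.1, 9.9])
```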

If your initial budget is 24 runs, a good compromise might be to run the full 21-run DSD proposed by JMP plus 3 points from the augmented DSD as validation points, to check whether the first model is sufficient for your needs.

Hope it will help you,

Victor GUILLER
L'Oréal Data & Analytics

"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)