Hi @ADouyon !
I'm glad the answers were helpful. I don't know how common this practice is, but I personally prefer to spend more time comparing different designs and sample sizes up front than to rush into the lab and run the experiments quickly, only to discover that I forgot a constraint or that my design is not well suited to my needs.
--> To illustrate this, I really like this quote from Ronald Fisher: "To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of."
Different designs may lead to different interpretations of the results, depending on which terms are included in the underlying models. For example, screening designs with main effects only may help filter the relevant factors for the follow-up of a study, but they can clearly lack precision and predictive ability for the response(s) in the presence of two-factor interactions and/or quadratic effects (possible lack of fit).
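To make that aliasing risk concrete outside of JMP, here is a minimal numpy sketch (the factors and effect sizes are made up for illustration): in a 4-run half fraction with generator C = AB, a true A×B interaction is indistinguishable from a main effect of C, so a main-effects-only screening fit silently misattributes it.

```python
import numpy as np

# 2^(3-1) fractional factorial with defining relation I = ABC:
# column C equals A*B, so the AB interaction is aliased with C.
A = np.array([-1, 1, -1, 1])
B = np.array([-1, -1, 1, 1])
C = A * B  # generator: C = AB

# Hypothetical true model: y = 10 + 3*A + 2*B + 1.5*(A*B); C has no real effect.
y = 10 + 3 * A + 2 * B + 1.5 * (A * B)

# Fit a main-effects-only screening model: y ~ A + B + C.
X = np.column_stack([np.ones(4), A, B, C])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# The 1.5 AB interaction shows up entirely as a spurious "C" main effect.
print(beta)
```

The fit looks perfect (no residual lack of fit to warn you), which is exactly why follow-up runs or richer designs are needed to separate such effects.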
You have several options to generate different designs and compare them:
- Change (add/remove) the terms in the model (two-factor interactions, quadratic effects, ...) or their estimability (you can switch the estimability from "Necessary" to "If Possible" for two-factor interaction and quadratic effects).
- Vary the sample size (number of experiments) in the design by adding replicate runs: are the extra runs relevant and useful enough to gain precision on the estimates, or to reduce the relative prediction variance over the experimental space?
- Change the optimality criterion depending on your goal: D-optimality for screening, I-optimality for good prediction precision of the response(s), Alias-optimality to reduce the aliasing between model terms and potentially active higher-order terms...
- Change the type of design (if possible!)...
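If you want to see what two of these criteria measure under the hood, here is a small numpy sketch (not JMP itself; the designs and prediction grid are illustrative assumptions): it compares a 4-run two-factor factorial against the same design with two replicate center runs added, using the D-criterion det(X'X)^(1/p) and the average scaled prediction variance over a grid as a simple I-criterion proxy.

```python
import numpy as np

def model_matrix(design):
    # Main-effects model: intercept plus one column per factor.
    return np.column_stack([np.ones(len(design)), design])

def d_criterion(X):
    # D-criterion: det(X'X)^(1/p); larger means more precise coefficients overall.
    return np.linalg.det(X.T @ X) ** (1 / X.shape[1])

def avg_pred_variance(X, grid):
    # I-criterion proxy: average scaled prediction variance x'(X'X)^-1 x on a grid.
    XtXinv = np.linalg.inv(X.T @ X)
    return float(np.mean([x @ XtXinv @ x for x in grid]))

factorial = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]])
X1 = model_matrix(factorial)                                  # 4 runs
X2 = model_matrix(np.vstack([factorial, [[0, 0], [0, 0]]]))   # + 2 center replicates

# Prediction grid over the experimental space [-1, 1]^2.
pts = np.linspace(-1, 1, 5)
grid = model_matrix(np.array([[u, v] for u in pts for v in pts]))

print(d_criterion(X1), d_criterion(X2))                        # extra runs raise D
print(avg_pred_variance(X1, grid), avg_pred_variance(X2, grid))  # ...and lower avg variance
```

Whether those two extra runs are "worth it" is exactly the trade-off the Compare Designs diagnostics help you judge.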
For the moment, to compare different DoEs you have to create them in JMP, then go to "DOE" > "Design Diagnostics" > "Compare Designs". But... spoiler alert! In JMP 17 it will be a lot easier to compare several designs (see screenshots): a "Design Explorer" platform will be included in the DoE creation, so there will be no need to create them manually one by one. You will be able to generate several designs in seconds and filter them based on your objectives and decision criteria.
And finally, take advantage of iterative designs, and don't expect or try to answer all your questions with one (big) design. It can be more powerful and efficient to start small, with a screening design for example, and then use augmentation to look for more detail, such as interactions and possible non-linear effects. Finally, if your goal is to optimize something, you can augment your design once again to fit a full Response Surface Model. At each step you gather more knowledge about your system and make sure that the next augmented design will bring even more, without wasting runs, time, and resources.
"Augment design" platform is really a powerful (and often underestimated) tool to gather more data and gain more understanding and build more accurate models without losing any previous information.
I hope this answer helps you,
Victor GUILLER
L'Oréal Data & Analytics
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)