Thanks for the upload and for adding the comments, @maria_astals!
Some complementary comments and information from my side:
Question 2: How to avoid pitfalls
Besides the excellent points mentioned in the discussion and comments, the article "Eight Keys to Successful DOE" from Quality Digest summarizes and explains well the most important aspects to consider when building designs:
- Set good objectives
- Measure responses quantitatively
- Replicate to dampen uncontrollable variation (noise)
- Randomize the run order
- Block out known sources of variation (these two keys are illustrated in the sketch after this list)
- Know which effects (if any) will be aliased
- Do a sequential series of experiments
- Always confirm critical findings
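Two of those keys, randomizing the run order and blocking, are easy to mechanize. Here is a minimal Python sketch (the factor names and the two-block split are hypothetical choices for illustration, not part of the article): it builds a full 2^3 factorial, confounds the three-way interaction with a two-level block, and randomizes the run order within each block:
```python
import itertools
import numpy as np

rng = np.random.default_rng(2024)  # fixed seed so the run sheet is reproducible

# Full 2^3 factorial in coded units (-1/+1); factor names are illustrative
factors = ["Temperature", "Pressure", "Time"]
design = np.array(list(itertools.product([-1, 1], repeat=3)))

# Block on the ABC interaction (a classic choice for two blocks of four)
abc = design[:, 0] * design[:, 1] * design[:, 2]
blocks = np.where(abc < 0, 1, 2)

# Randomize the run order within each block
run_sheet = []
for b in (1, 2):
    idx = np.flatnonzero(blocks == b)
    rng.shuffle(idx)
    run_sheet += [(b, *design[i]) for i in idx]

print("Block  " + "  ".join(f"{f:>11}" for f in factors))
for row in run_sheet:
    print(f"{row[0]:>5}  " + "  ".join(f"{v:>11}" for v in row[1:]))
```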
Question 3: Mistakes using Definitive Screening Designs (DSD)
A specific analysis platform for DSDs, the Fit Definitive Screening platform, has been developed to leverage the foldover structure of the DSD: it makes the analysis easier and enables the detection of two-factor interactions and quadratic effects following the principle of effect heredity. This analysis can be extended to any design with a foldover structure, not only DSDs, as explained in an article by Bradley Jones.
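To make that foldover structure concrete, below is a small numpy sketch (an illustration of the structure, not JMP's implementation): it builds the 13-run, 6-factor DSD by stacking a 6x6 conference matrix C, its fold-over -C, and a center run, then verifies that main effects are mutually orthogonal and orthogonal to all quadratic effects:
```python
import numpy as np

# A 6x6 conference matrix (zero diagonal, +/-1 elsewhere, C @ C.T = 5*I)
C = np.array([
    [ 0,  1,  1,  1,  1,  1],
    [ 1,  0,  1, -1, -1,  1],
    [ 1,  1,  0,  1, -1, -1],
    [ 1, -1,  1,  0,  1, -1],
    [ 1, -1, -1,  1,  0,  1],
    [ 1,  1, -1, -1,  1,  0],
])
assert np.array_equal(C @ C.T, 5 * np.eye(6))

# Definitive Screening Design = foldover of C plus one center run: 13 runs
D = np.vstack([C, -C, np.zeros((1, 6), dtype=int)])

# Main effects are mutually orthogonal ...
print(D.T @ D)       # diagonal matrix

# ... and orthogonal to every quadratic effect (a consequence of the foldover)
print(D.T @ D**2)    # all zeros
```
The foldover is what makes the second check work: quadratic (and other even-order) terms take the same value on a run and on its mirror image, while main effects change sign, so their cross-products cancel. That is the structure the Fit Definitive Screening platform exploits.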
However, even though this analysis method is very powerful, some active terms may still go undetected, due to the low power for higher-order effects and possibly small effect sizes; see for example: Fit Definitive Screening vs. Stepwise (min. AICc) for model selection
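To show mechanically what a "min. AICc" stepwise search does, here is a self-contained Python sketch on simulated screening data (an illustration of the criterion, not the JMP Stepwise platform): forward selection adds one term at a time and stops as soon as the corrected AIC stops improving:
```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated screening data: 20 runs, 6 candidate terms, only X1 and X3 active
n, p = 20, 6
X = rng.choice([-1.0, 1.0], size=(n, p))
y = 3.0 * X[:, 0] + 2.0 * X[:, 2] + rng.normal(scale=1.0, size=n)

def aicc(y, X_terms):
    """AICc of an OLS fit with intercept; k counts coefficients plus sigma."""
    A = np.ones((len(y), 1)) if X_terms is None else np.column_stack([np.ones(len(y)), X_terms])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta) ** 2)
    k = A.shape[1] + 1
    aic = len(y) * np.log(rss / len(y)) + 2 * k
    return aic + 2 * k * (k + 1) / (len(y) - k - 1)  # small-sample correction

selected, best = [], aicc(y, None)
while True:
    candidates = [j for j in range(p) if j not in selected]
    if not candidates:
        break
    scores = {j: aicc(y, X[:, selected + [j]]) for j in candidates}
    j_best = min(scores, key=scores.get)
    if scores[j_best] >= best:  # stop once AICc no longer improves
        break
    selected.append(j_best)
    best = scores[j_best]

print("selected terms:", [f"X{j+1}" for j in selected], "| AICc:", round(best, 2))
```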
Other platforms may be helpful as well, such as the Fit Two Level Screening platform or Generalized Regression models (with different estimation methods like Best Subset, Pruned Forward Selection, Two-Stage Forward Selection, ...), as they can help you build different models, using different methods to detect active effects and estimate parameters.
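As a rough feel for how Best Subset differs from a forward search, this companion sketch (same simulated data as above, again just an illustration rather than the Generalized Regression platform) enumerates all 2^6 subsets of candidate terms and keeps the one with the smallest AICc; on harder problems the two strategies can disagree, which is precisely why comparing them is informative:
```python
import itertools
import numpy as np

rng = np.random.default_rng(7)

# Same simulated screening data as in the stepwise sketch above
n, p = 20, 6
X = rng.choice([-1.0, 1.0], size=(n, p))
y = 3.0 * X[:, 0] + 2.0 * X[:, 2] + rng.normal(scale=1.0, size=n)

def aicc(y, A):
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta) ** 2)
    k = A.shape[1] + 1  # coefficients plus error variance
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

# Exhaustive enumeration: feasible only while 2^p stays small
best_subset, best_score = (), np.inf
for r in range(p + 1):
    for subset in itertools.combinations(range(p), r):
        A = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
        score = aicc(y, A)
        if score < best_score:
            best_subset, best_score = subset, score

print("best subset:", [f"X{j+1}" for j in best_subset], "| AICc:", round(best_score, 2))
```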
As the saying goes, "All models are wrong, some are useful", so the key is really to compare and evaluate different models to get a good understanding of the system. Different model selection methods can yield different solutions. Choosing a relevant evaluation metric (depending on the objective), comparing the candidate models, and combining the different modeling methods with domain expertise can give you a broader and more precise view of what matters most. From there, you can plan your next experiments to refine your model, and/or prepare some validation points to assess your model's performance.
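As a minimal sketch of that last step (hypothetical data and candidate models in Python, not a JMP workflow), the snippet below fits two competing models on the experimental runs and compares them on a few held-out validation points, with RMSE as the evaluation metric:
```python
import numpy as np

rng = np.random.default_rng(11)

def true_response(x1, x2):
    """Hypothetical ground truth: main effects plus a curvature term."""
    return 5 + 3 * x1 - 2 * x2 + 1.5 * x1**2

# 12 experimental runs plus 4 extra validation points, with measurement noise
X_run = rng.uniform(-1, 1, size=(12, 2))
X_val = rng.uniform(-1, 1, size=(4, 2))
y_run = true_response(X_run[:, 0], X_run[:, 1]) + rng.normal(scale=0.5, size=12)
y_val = true_response(X_val[:, 0], X_val[:, 1]) + rng.normal(scale=0.5, size=4)

def design_matrix(X, quadratic):
    cols = [np.ones(len(X)), X[:, 0], X[:, 1]]
    if quadratic:
        cols.append(X[:, 0] ** 2)
    return np.column_stack(cols)

for quadratic in (False, True):
    A = design_matrix(X_run, quadratic)
    beta, *_ = np.linalg.lstsq(A, y_run, rcond=None)
    pred = design_matrix(X_val, quadratic) @ beta
    rmse = np.sqrt(np.mean((y_val - pred) ** 2))
    label = "main effects + x1^2" if quadratic else "main effects only"
    print(f"{label:>20}: validation RMSE = {rmse:.2f}")
```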
(I have submitted an abstract for JMP Europe Discovery 2025 using a Mixed-Level Screening Design and the different modeling platforms mentioned above for the analysis of the responses; I hope to be selected to present this comparison of platforms and models.)
Question 4: Moving from DoE to Taguchi experiments - robust design
There are several definitions of robustness and different ways to create a robustness DoE, depending on which factors you want to test the robustness of your experimental space/optimum point against: external noise factors, variation in the process/experiment factors themselves, or a combination of both. Here is a series of articles from Stat-Ease explaining which type of design fits each robustness situation (a simulation sketch of the second situation follows the list):
- Robustness against external noise factors: https://statease.com/blog/achieving-robust-processes-via-three-experiment-design-options-part-1/
- Robustness against variation in the set points for process factors: https://statease.com/blog/achieving-robust-processes-via-three-experiment-design-options-part-2/
- A combination of the first two types: https://statease.com/blog/achieving-robust-processes-via-three-experiment-design-options-part-3/
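To illustrate the second situation, here is a small Monte Carlo sketch in Python (the fitted response surface and the set-point tolerance are hypothetical, not one of the Stat-Ease examples): it propagates normal variation in the factor set points through the model and compares the response spread at two candidate operating points, one on the flat top of the surface and one on a steeper shoulder:
```python
import numpy as np

rng = np.random.default_rng(42)

def response(x1, x2):
    """Hypothetical fitted response surface (factors in coded units)."""
    return 80 + 6 * x1 + 4 * x2 - 7 * x1**2 - 3 * x2**2 - 4 * x1 * x2

# Two candidate operating points with similar predicted responses
candidates = {"flat stationary point": (0.29, 0.47),
              "steep shoulder":        (0.70, 0.40)}

# Assume the process holds set points only to about +/- 0.15 (1 sigma)
sigma, n_sim = 0.15, 100_000

for name, (x1, x2) in candidates.items():
    y = response(x1 + rng.normal(scale=sigma, size=n_sim),
                 x2 + rng.normal(scale=sigma, size=n_sim))
    print(f"{name:>21}: nominal = {response(x1, x2):.1f}, "
          f"mean = {y.mean():.1f}, std = {y.std():.2f}")
```
The flat point transmits far less of the set-point variation than the steep one, which is the propagation-of-error argument behind this type of robust design.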
Some resources that may be helpful as well:
Designing Robust Products (2020-US-30MP-539)
How to Design for Robust Production Processes | JMP
I hope this information helps JMP users. Happy experimenting!
Victor GUILLER
L'Oréal Data & Analytics
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)