Hi @QW,
The "Fit Definitive Screening Design" is an analysis designed for DSD, as it emphasizes and supports the DOE principles of effect hierarchy, effect heredity, and effect sparsity : Principles and Guidelines for Experimental Design (jmp.com)
- In a first round, the Fit DSD platform tries to find the significant main effects and creates a model based on the identified main effects.
- In the second round, the residuals from that first model are analyzed to detect 2nd-order terms (depending on the main effects identified before, following the principle of effect heredity), such as interaction terms or quadratic (power) terms.
By default the platform uses the "strong heredity" principle (for example, the interaction A*B can only be analyzed and identified in the second round if both main effects A AND B are identified as significant in the first round), but you can allow more flexibility in the modeling by unchecking "Quadratic terms obey strong heredity" and "Interactions obey strong heredity". That enables weak heredity, meaning that the interaction A*B can be analyzed in the second round if main effect A OR main effect B is identified as significant in the first round. In your case, weak heredity gives the same model. A simplified sketch of this two-round logic is shown below. More info here: Effective Model Selection for DSDs (jmp.com)
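To make the two-round logic concrete, here is a minimal, simplified sketch in Python (statsmodels) on a toy 4-factor DSD with simulated data. It is only an illustration of the idea described above, not JMP's actual Fit DSD algorithm, and the factor names and "true" model are invented for the example:

```python
import numpy as np
import statsmodels.api as sm
from itertools import combinations

rng = np.random.default_rng(1)

# Toy 4-factor DSD: a conference matrix C, its foldover -C, and one center run (9 runs).
C = np.array([[ 0,  1,  1,  1],
              [-1,  0,  1, -1],
              [-1, -1,  0,  1],
              [-1,  1, -1,  0]])
D = np.vstack([C, -C, np.zeros((1, 4))]).astype(float)
names = ["A", "B", "C", "D"]

# Hypothetical "true" model: two active main effects plus one interaction.
y = 3.0 * D[:, 0] + 2.8 * D[:, 1] + 1.5 * D[:, 0] * D[:, 1] + rng.normal(0, 0.3, len(D))

# Round 1: fit main effects only and keep the significant ones.
fit1 = sm.OLS(y, sm.add_constant(D)).fit()
active = [i for i in range(4) if fit1.pvalues[i + 1] < 0.05]
print("Active main effects:", [names[i] for i in active])
if not active:
    raise SystemExit("No active main effects; nothing to screen in round 2.")

# Round 2: regress the residuals on heredity-eligible 2nd-order terms.
# Strong heredity: quadratics of active factors, and interactions whose BOTH
# parents are active (relax to "at least one parent" for weak heredity).
cand, cand_names = [], []
for i in active:
    cand.append(D[:, i] ** 2)
    cand_names.append(f"{names[i]}^2")
for i, j in combinations(active, 2):
    cand.append(D[:, i] * D[:, j])
    cand_names.append(f"{names[i]}*{names[j]}")

# In a DSD the 2nd-order columns are orthogonal to the main-effect columns,
# which is what makes this residual-based second round work.
fit2 = sm.OLS(fit1.resid, sm.add_constant(np.column_stack(cand))).fit()
for name, p in zip(cand_names, fit2.pvalues[1:]):
    print(f"{name}: p = {p:.3f}")
```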
Please also note that DSDs, even if powerful and interesting for detecting 2nd-order effects, are still screening designs, so you won't be able to detect and estimate all possible effects. In particular, the power for quadratic or interaction terms is quite low compared to main effects, so it is "normal" to have more difficulty detecting quadratic effects.
If you think there might still be hidden 2nd-order terms that were not detected, I would highly recommend augmenting your design with more runs (with the "Augment Design" platform) to gain more confidence in your model (and possibly detect new quadratic and/or interaction effects). A rough sketch of the idea follows.
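If it helps to picture what augmentation does, here is a rough, hypothetical Python sketch (not JMP's Augment Design algorithm) of one way extra runs can be chosen to better support a full quadratic model: greedily pick candidate points that most improve a regularized D-criterion. The candidate grid, the number of added runs, and the small ridge term are arbitrary choices for illustration:

```python
import numpy as np
from itertools import combinations, product

def quad_model_matrix(D):
    """Intercept + main effects + 2-factor interactions + quadratics."""
    n, k = D.shape
    cols = [np.ones(n)] + [D[:, i] for i in range(k)]
    cols += [D[:, i] * D[:, j] for i, j in combinations(range(k), 2)]
    cols += [D[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

# Existing design: the same toy 9-run, 4-factor DSD as above.
C = np.array([[0, 1, 1, 1], [-1, 0, 1, -1], [-1, -1, 0, 1], [-1, 1, -1, 0]])
design = np.vstack([C, -C, np.zeros((1, 4))]).astype(float)

# Candidate runs: the full 3-level grid in coded units (81 points).
candidates = np.array(list(product([-1.0, 0.0, 1.0], repeat=4)))
p = quad_model_matrix(design).shape[1]

for _ in range(6):                            # add 6 extra runs, one at a time
    best_score, best_row = -np.inf, None
    for row in candidates:
        X = quad_model_matrix(np.vstack([design, row]))
        # Regularized log-det of X'X (the starting design alone cannot yet
        # estimate the full quadratic model, so a small ridge keeps it non-singular).
        _, logdet = np.linalg.slogdet(X.T @ X + 1e-6 * np.eye(p))
        if logdet > best_score:
            best_score, best_row = logdet, row
    design = np.vstack([design, best_row])

print("New runs added by the greedy search:")
print(design[-6:])
```

In JMP itself you would simply use the Augment Design platform and specify the terms you care about; the sketch is only meant to show why extra runs raise the information (and hence the power) available for 2nd-order terms.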
The "Stepwise" selection approach is different, more oriented like a "model-agnostic" analysis, where the selection of the terms do not necessarily follow some of the DoE principles (even if you can "lock" some terms like main effects to respect effect heredity/hierarchy), but is focussed on optimizing a criterion (AICc, BIC, ...). This can result in more complex models, as the "safeguards" from other platforms like "Fit DSD" may not be present. If you want to really push the boundaries and explore other designs with the Stepwise platform, you can also click on the red triangle, and try "All Possible Models". You can specify the number of terms in the model, and JMP will try all possible combinations of terms to provide all possible models with the number of terms specified. It can give you some "inspiration" in the modeling, but the results should be taken with caution, as some terms may be entered in the model without any statistical significance (so a little bit of fine-tuning might be needed after) :
(Screenshot: the first model, where some terms are added by default even though their p-values are above the 0.05 threshold.)
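As a side illustration of what "All Possible Models" does conceptually, here is a small, hypothetical Python sketch (again, not the JMP implementation): enumerate every model with a fixed number of terms from a candidate list and rank them by AICc. The data, factor names, and number of terms are invented for the example:

```python
import numpy as np
import statsmodels.api as sm
from itertools import combinations

def aicc(fit, n):
    k = fit.df_model + 2                       # slopes + intercept + error variance
    aic = -2 * fit.llf + 2 * k
    return aic + (2 * k * (k + 1)) / max(n - k - 1, 1)

rng = np.random.default_rng(7)
n = 20
data = {f"X{i}": rng.uniform(-1, 1, n) for i in range(1, 5)}
y = 3 * data["X1"] - 2 * data["X2"] + 1.5 * data["X1"] * data["X2"] + rng.normal(0, 0.5, n)

# Candidate terms: main effects, 2-factor interactions, quadratics.
terms = dict(data)
names = list(data)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        terms[f"{a}*{b}"] = data[a] * data[b]
    terms[f"{a}^2"] = data[a] ** 2

n_terms = 3                                     # "number of terms in the model"
results = []
for subset in combinations(terms, n_terms):
    X = sm.add_constant(np.column_stack([terms[t] for t in subset]))
    results.append((aicc(sm.OLS(y, X).fit(), n), subset))

for score, subset in sorted(results)[:5]:       # the 5 best models by AICc
    print(f"AICc = {score:7.2f}   terms: {', '.join(subset)}")
```

The same caution applies as in JMP: the model ranked best by the criterion can still contain terms that are not statistically significant, so treat it as inspiration rather than a final answer.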
It's always interesting to try and compare different modeling options, even more so when domain expertise can guide you in the process. It can give you ideas about the important terms and help you define how to augment your DoE with new runs, based on an assumed model.
I don't think there is a right or wrong answer here; the key is really to compare and evaluate different models to get a good understanding of the case. Some methods are more conservative than others, but combining different modeling approaches with domain expertise can give you a broader view of what matters most. From there, you can plan your next experiments to refine your model and prepare some validation points to assess it.
I hope this (long) answer will help you,
PS: You can also compare models built with other platforms, like Generalized Regression with different estimation methods (Best Subset, Pruned/Two-Stage/Forward Selection, ...).
Victor GUILLER
L'Oréal Data & Analytics
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)