Hi @SaraA,
The examples shown in the JMP modules are there to bring design & analysis basics to non-statisticians who would like to "get started". The datasets are deliberately easy to follow and the analysis is simplified, so that everyone can understand and try it.
But as @statman mentioned, the topic of modeling is far broader (and sometimes more complicated) than relying "only" on p-values. Depending on your objective(s), you may have different paths to model evaluation and selection:
- Explanatory model : In explanatory mode, you focus on the terms that have some influence on the response(s), so you might evaluate whether to include each term based on statistical significance (with the help of p-values and a predefined threshold such as 0.05) and practical significance (size of the estimates, selection guided by domain expertise). R² and adjusted R² (and the difference between the two, which should stay small) are good metrics to understand how much variation is explained by the identified terms and to select relevant model(s) to explain your system under study.
- Predictive model : In predictive mode, you focus on the terms that help you minimize prediction error, so you might evaluate whether to include each term based on how much it improves predictive performance, by looking at the actual vs. predicted plot and at the size of the errors (residual plot). RMSE is a good metric to assess which model(s) have the best predictive performance (the goal is to minimize RMSE). A small sketch of both views follows this list.
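To make these two views concrete, here is a minimal sketch in Python (statsmodels) on a simulated two-factor dataset; the factor names, data and model terms are illustrative assumptions, not taken from any JMP table:

```python
# Minimal sketch: fit a two-factor model and read off the metrics mentioned above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40
temp = rng.uniform(50, 100, n)                    # hypothetical factor 1
time = rng.uniform(1, 5, n)                       # hypothetical factor 2
y = 2.0 + 0.05 * temp + 0.8 * time + rng.normal(0, 0.5, n)   # simulated response

X = sm.add_constant(np.column_stack([temp, time]))
fit = sm.OLS(y, X).fit()

# Explanatory view: which terms matter, and how much variation is explained?
print(fit.params)                                 # size of the estimates (practical significance)
print(fit.pvalues)                                # statistical significance of each term
print(fit.rsquared, fit.rsquared_adj)             # the gap between the two should stay small

# Predictive view: how large are the prediction errors?
rmse = np.sqrt(np.mean((y - fit.fittedvalues) ** 2))
print(rmse)                                       # lower RMSE = better predictive performance
```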
You might also be interested in a combination of the two goals, in which case other metrics can help you evaluate and select models, such as the information criteria (AICc, BIC), which strike a compromise between the predictive performance of the model and its complexity. When comparing models on these criteria, the lower the better. You can also use the (log-)likelihood, which is similar but does not include a penalty for model complexity.
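As a hedged sketch of what such a comparison looks like (same simulated data idea as above; the candidate term sets and the way parameters are counted in the small-sample correction are assumptions):

```python
# Compare candidate models on log-likelihood, AICc and BIC (lower AICc/BIC is better).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40
temp = rng.uniform(50, 100, n)
time = rng.uniform(1, 5, n)
y = 2.0 + 0.05 * temp + 0.8 * time + rng.normal(0, 0.5, n)

candidates = {
    "temp only":               sm.add_constant(temp),
    "temp + time":             sm.add_constant(np.column_stack([temp, time])),
    "temp + time + temp*time": sm.add_constant(np.column_stack([temp, time, temp * time])),
}
for name, X in candidates.items():
    fit = sm.OLS(y, X).fit()
    k = X.shape[1]                                   # parameter count (conventions differ on counting the error variance)
    aicc = fit.aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction of AIC
    print(f"{name:25s} logLik={fit.llf:8.2f}  AICc={aicc:8.2f}  BIC={fit.bic:8.2f}")

# The raw log-likelihood always favours the most complex model; AICc/BIC penalize that complexity.
```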
I would recommend being cautious with p-values and avoiding the "Cult of Statistical Significance": Solved: Re: Statistical Significance - JMP User Community
To create your model, there are many platforms available in JMP (Fit Model, Generalized Regression, Fit Two Level Screening, Fit Definitive Screening, ...) that use different techniques, estimation methods or validation criteria, sometimes depending on your design choice. You can try several of them and see how, when and where the resulting models agree or disagree. Try plotting the outcomes of your different fits with raster plots or simpler plots like this one:
Depending on the platforms used, you can check how well the models agree, and with a metric adapted to your objective you can more easily choose one or several models.
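Outside of JMP, a rough stand-in for this kind of cross-platform check might look like the sketch below (Python, scikit-learn); the data and the pairing of an ordinary least squares fit against a lasso-penalized fit are purely illustrative assumptions, not the JMP platforms named above:

```python
# Compare predictions from two model-selection routes and flag where they disagree.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression, LassoCV

rng = np.random.default_rng(1)
n = 40
X = rng.uniform(-1, 1, size=(n, 4))                               # four hypothetical factors
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.3, n)   # only two are active

pred_ols = LinearRegression().fit(X, y).predict(X)                # keep-everything fit
pred_lasso = LassoCV(cv=5).fit(X, y).predict(X)                   # penalized, term-selecting fit

disagreement = np.abs(pred_ols - pred_lasso)
print("largest disagreement:", disagreement.max(), "at run", disagreement.argmax())

# Points off the 45-degree line are runs where the two routes disagree
# and are worth discussing with domain experts or confirming experimentally.
plt.scatter(pred_ols, pred_lasso)
plt.axline((0, 0), slope=1)
plt.xlabel("OLS prediction")
plt.ylabel("Lasso prediction")
plt.show()
```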
Coming back to your original question, both statistical significance and practical significance should be considered when modeling the response(s), and the resulting model should be confronted with domain expertise as well as experimental validation.
I hope this complementary answer is helpful, even if it comes late,
Victor GUILLER
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)