Hello everyone!
I would like your help on this subject:
I have recently created a custom design in JMP with 5 parameters. Based on the results, I have to evaluate the parameter main effects, the nonlinearity, and the parameter interactions for three output parameters.
Now, the custom design gives me a suggested model for evaluating my results, or I can choose one myself. In the example I uploaded you will find both the JMP suggestion and my second choice, saved in the upper left corner of the data table. Please help me understand how you would choose the right evaluation, as the optimized settings look slightly different for the two models, although in my opinion the values are good for both evaluation methods.
Thanks in advance!
-Su
The selection of the best model is not always clear. You included many high-order terms, of which only a few are likely to be active. I used the Generalized Regression platform in JMP Pro 15 with the adaptive LASSO method to select terms based on minimum AICc. Here is the Prediction Profiler with the fitted models saved as column formulas.
I attached your data table with scripts for the fitting and the columns with the model formulas for your examination.
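For anyone following along without JMP Pro, here is a rough sketch of the same idea in plain Python rather than the attached JSL. It is only an illustration, not the scripts in the table: the factor names (x1..x5), the stand-in data, and the use of ordinary LASSO with an AIC stopping rule (scikit-learn does not offer adaptive LASSO or AICc directly) are all assumptions on my part.

```python
# Hypothetical sketch (not the attached JSL scripts): build the full quadratic
# model for 5 factors and let a LASSO penalty pick the active terms.
# Plain LASSO with AIC is used as a stand-in for adaptive LASSO with AICc.
from itertools import combinations

import numpy as np
import pandas as pd
from sklearn.linear_model import LassoLarsIC

def quadratic_terms(df, factors):
    """Main effects, all two-factor interactions, and quadratic terms."""
    X = df[factors].copy()
    for a, b in combinations(factors, 2):      # 10 two-factor interactions
        X[f"{a}*{b}"] = df[a] * df[b]
    for a in factors:                          # 5 quadratic terms
        X[f"{a}^2"] = df[a] ** 2
    return X

factors = ["x1", "x2", "x3", "x4", "x5"]       # placeholder factor names

# Stand-in for the real design table; replace with the uploaded data.
rng = np.random.default_rng(1)
runs = pd.DataFrame(rng.uniform(-1, 1, size=(24, 5)), columns=factors)
y = rng.normal(size=24)                        # stand-in for one measured response

X = quadratic_terms(runs, factors)             # 20 candidate terms for 5 factors
fit = LassoLarsIC(criterion="aic").fit(X, y)   # penalty chosen by minimum AIC

active = X.columns[np.abs(fit.coef_) > 1e-8]
print("Terms kept by the penalty:", list(active))
```

The only point of the sketch is that the full quadratic model for 5 factors already has 20 candidate terms, so some penalized or criterion-based selection is needed to find the few that are actually active.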
The Prediction Profiler is one way to exploit the selected model. I suggest using it to find settings that predict the most desirable outcomes and settings that predict an undesirable outcome.
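As a rough stand-in for that Profiler exploration, one could also search the fitted model numerically for its best and worst predicted settings. This continues the hypothetical sketch above (it reuses fit, factors, and quadratic_terms), assumes coded factor ranges of -1 to 1, and assumes a larger response is more desirable; it is not how the Profiler itself works, just the same idea of locating desirable and undesirable settings.

```python
# Hypothetical continuation of the sketch above: search the factor space for the
# settings with the highest and lowest predicted response. Reuses fit, factors,
# and quadratic_terms; the -1..1 coded ranges and "larger is better" are assumptions.
import numpy as np
import pandas as pd
from scipy.optimize import minimize

bounds = [(-1.0, 1.0)] * len(factors)          # replace with the real factor limits

def predict(x):
    point = pd.DataFrame([x], columns=factors)
    return float(fit.predict(quadratic_terms(point, factors))[0])

best = minimize(lambda x: -predict(x), x0=np.zeros(len(factors)), bounds=bounds)
worst = minimize(lambda x: predict(x), x0=np.zeros(len(factors)), bounds=bounds)

print("Most desirable settings:", np.round(best.x, 3), "->", round(predict(best.x), 3))
print("Undesirable settings:   ", np.round(worst.x, 3), "->", round(predict(worst.x), 3))
```

Both sets of settings are then natural candidates for the confirmation runs described next.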
Confirming the model requires new empirical evidence: run your system or process using both sets of settings and determine whether the mean outcome agrees with the prediction. This is the only way of confirming a model that I know of.
Just to add: a good model can be confirmed. I recommend confirming the predictions both for settings that should produce the desired outcomes and for settings that should produce bad outcomes. A good model can predict both.
To put @markbailey's confirmation recommendations another way, I'll quote a wise engineer I once worked with during my tenure at Eastman Kodak Company. His name was Dave Neimeyer, and his quote went something like this: "Until you can turn a failure mechanism on and off, you don't understand root cause."