SDH
New Contributor

Evaluation of the Results by Custom Design

Hello everyone!

I would like your help on this subject:

I recently created a custom design in JMP with five parameters. I need to evaluate the parameters' main effects, nonlinearity, and interactions based on the results for three output parameters.

Now, the custom design gives me a suggestion on how to evaluate my results, or I can choose myself. In the example I uploaded, you will find JMP's suggestion and my second choice, both saved as scripts in the upper-left corner of the data table. Please help me out on how you would choose the right evaluation: the optimized settings look slightly different for the two, although in my opinion the values are good for both evaluation methods.

 

Thanks in advance!

-Su

 

@martindemel 

 

 


7 REPLIES

Re: Evaluation of the Results by Custom Design

The selection of the best model is not always clear. You included many high-order terms, of which only a few are likely to be active. I used the Generalized Regression platform in JMP Pro 15 with the adaptive Lasso method to select terms based on minimum AICc. Here is the Prediction Profiler with the fitted models saved as column formulas.

 

[Screenshot: Prediction Profiler with the fitted models]

 

I attached your data table with scripts for the fitting and the columns with the model formulas for your examination.
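For reference, here is a minimal JSL sketch of this kind of fit. The column names :X1, :X2, and :Y1 are placeholders, and only two factors are shown; the scripts attached to the table use the actual names and all five factors.

    // Minimal sketch: adaptive Lasso fit with AICc validation in the
    // Generalized Regression platform (JMP Pro).
    // :X1, :X2, :Y1 are placeholder names, not the table's actual columns.
    dt = Current Data Table();
    genreg = dt << Fit Model(
        Y( :Y1 ),
        // response-surface terms for two factors; the full model lists all five
        Effects( :X1 & RS, :X2 & RS, :X1 * :X1, :X1 * :X2, :X2 * :X2 ),
        Personality( "Generalized Regression" ),
        Run(
            Fit(
                Estimation Method( "Adaptive Lasso" ),
                Validation Method( "AICc" )
            )
        )
    );
    // Then save the selected model from the fit's red-triangle menu:
    // Save Columns > Save Prediction Formula.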

Learn it once, use it forever!


SDH
New Contributor

Re: Evaluation of the Results by Custom Design

Thank you so much, Mr. Bailey!

Re: Evaluation of the Results by Custom Design

Just to add: a good model can be confirmed. I recommend confirming the predictions both for settings that should produce the desired outcomes and for settings that should produce bad outcomes. A good model can predict good and bad outcomes alike.
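If it helps, here is a hedged JSL sketch of that workflow. It assumes the prediction formulas were saved as columns named :Pred Formula Y1, :Pred Formula Y2, and :Pred Formula Y3, which are placeholders for the actual saved-formula column names.

    // Sketch: profile the saved prediction formula columns and let JMP
    // find the most desirable settings.
    dt = Current Data Table();
    prof = dt << Profiler(
        Y( :Pred Formula Y1, :Pred Formula Y2, :Pred Formula Y3 ),
        Desirability Functions( 1 )
    );
    prof << Maximize Desirability; // settings predicted to give good outcomes
    // To find settings for a *bad* outcome, flip each response's
    // desirability goal (e.g., maximize -> minimize) in the Response
    // Limits and run Maximize Desirability again.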

Learn it once, use it forever!
SDH
New Contributor

Re: Evaluation of the Results by Custom Design

Do you mean to set the desirability values for the output parameters and see whether a desirability level of 1 can be reached? Or is there another way to confirm a good model?

Re: Evaluation of the Results by Custom Design

The Prediction Profiler is one way to exploit the selected model. I suggest using it to find settings that predict the most desirable outcomes and settings that predict an undesirable outcome.

 

Confirming the model requires new empirical evidence: run your system/process using both sets of settings and determine if the mean outcome agrees with the prediction. This is the only way of confirming that I know of.
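For the comparison itself, a small JSL sketch is one way to test whether the confirmation-run mean agrees with the prediction. All numbers and names below are made-up placeholders, not results from your study.

    // Sketch of the numerical check: does the mean of the new
    // confirmation runs agree with the model's predicted value?
    predicted = 42.0; // placeholder: profiler's prediction at the chosen settings
    confdt = New Table( "Confirmation Runs",
        New Column( "Y1", Numeric, "Continuous",
            Set Values( [41.2, 43.1, 42.5] ) // placeholder confirmation results
        )
    );
    confdt << Distribution(
        Continuous Distribution(
            Column( :Y1 ),
            Test Mean( predicted ) // t test of the mean outcome vs. the prediction
        )
    );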

Learn it once, use it forever!


P_Bartell
Contributor

Re: Evaluation of the Results by Custom Design

To put @markbailey's confirmation recommendations another way, I'll quote a wise engineer I once worked with during my tenure at Eastman Kodak Company...his name was Dave Neimeyer...and his quote was something like this: "Until you can turn a failure mechanism on and off, you don't understand root cause."

Re: Evaluation of the Results by Custom Design

I guess he meant that you can check the best settings for the most desired output by running one or two additional experiments at exactly those settings. Then you find the settings for the bad outcomes (using the profiler) and run another one or two experiments at these (usually new) settings.
If the experiments at the good settings and at the bad settings both match your model's predictions, you can judge the model to be a good predictive and explanatory model. If you can predict only good outcomes, or only bad ones, your model may be biased toward one side. This sometimes happens with rare events when you use the full data set to fit the model: because the rare events contribute so little to the overall error, the model can predict them as good even though they are bad; saying everything is good (i.e., there are no bad results) still yields a small error. This is a trap many people fall into. Hope this helps.