
DoE model validation/verification feature

I would like to suggest a feature for validation runs of a DoE model. Usually, once I have the data for my DoE model, I run an additional 3-5 runs to validate the model and compare the outcome of those runs to what the model predicted. It would be very useful to have this as a feature in JMP: augment the design with the additional runs and compare the results to the model predictions with an equivalence test.
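For illustration only, here is a minimal Python sketch (not JMP) of the kind of comparison I mean: a two one-sided tests (TOST) equivalence check of confirmation-run results against the model predictions at those settings. The numbers and the equivalence margin are hypothetical.

```python
# Hypothetical example: TOST equivalence check of confirmation runs vs. model predictions.
import numpy as np
from scipy import stats

predicted = np.array([52.1, 48.7, 50.3, 49.9, 51.2])  # model predictions at the confirmation settings
observed  = np.array([51.4, 49.5, 50.9, 48.8, 51.8])  # measured confirmation runs
delta = 2.0                                            # practical equivalence margin (a domain choice)

diff = observed - predicted
n, mean_d = len(diff), diff.mean()
se_d = diff.std(ddof=1) / np.sqrt(n)

# Two one-sided t-tests: reject "mean difference <= -delta" and "mean difference >= +delta"
p_lower = stats.t.sf((mean_d + delta) / se_d, df=n - 1)
p_upper = stats.t.cdf((mean_d - delta) / se_d, df=n - 1)
p_tost = max(p_lower, p_upper)

print(f"mean difference = {mean_d:.2f}, TOST p-value = {p_tost:.3f}")
# p_tost < alpha  ->  predictions and confirmation runs are equivalent within +/- delta
```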

8 Comments
Status changed to: Acknowledged

@ASTA - Thank you for your suggestion! We have captured your request and will take it under consideration.

SamGardner
Staff

@ASTA thanks for the idea. You can currently do this in JMP if you add the additional experimental rows manually, add a grouping column with labels for the runs (e.g., "Experiment" and "Confirmation" as the group labels), and then run the same model with the group column added to it. Within the effects you can do an equivalence test, underneath the multiple comparisons analysis (LS Means Student's t, or another comparison); a rough non-JMP sketch of the same idea follows the screenshots below. Does that meet your needs, or are you looking for something else?

 

[Screenshot: SamGardner_0-1665679638106.png]

 

[Screenshot: SamGardner_1-1665679669765.png]
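Outside of JMP, the grouping approach can be sketched roughly in Python. The file, column names, model terms, and the equivalence margin here are all hypothetical; statsmodels and scipy are assumed to be available.

```python
# Rough sketch of the grouping approach: refit the experimental model with an added
# indicator for the confirmation runs, then run an equivalence (TOST) check on that shift.
# File, column names, model terms, and the margin are hypothetical.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("doe_with_confirmation.csv")               # factors X1, X2, response Y, label column Group
df["Confirm"] = (df["Group"] == "Confirmation").astype(int)

fit = smf.ols("Y ~ X1 + X2 + X1:X2 + Confirm", data=df).fit()
shift, se, dof = fit.params["Confirm"], fit.bse["Confirm"], fit.df_resid

delta = 2.0                                                  # practical equivalence margin
p_lower = stats.t.sf((shift + delta) / se, dof)              # H0: shift <= -delta
p_upper = stats.t.cdf((shift - delta) / se, dof)             # H0: shift >= +delta
print(f"confirmation shift = {shift:.2f}, TOST p-value = {max(p_lower, p_upper):.3f}")
```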

 

 

SamGardner
Staff
Status changed to: Needs Info
 
mia_stephens
Staff

Hi @ASTA, does Sam's suggestion meet your needs? Also wondering whether SVEM in JMP Pro (via Generalized Regression) might be an alternative for validating DoE models.

mia_stephens
Staff

Hi @ASTA, just following up on this request. With the suggestion provided by @SamGardner, we will mark this as delivered. Please let us know if this does not satisfy your request.

mia_stephens
Staff
Status changed to: Delivered
 
ASTA
Level II

Hi. Thanks for the reply. I will try this out.

Victor_G
Super User

@mia_stephens @SamGardner,

I might have another suggestion regarding the validation of a DoE model (or any model in general) with confirmation runs, inspired by the latest "Statistically Speaking" featuring Dr. Nathaniel Stevens from the University of Waterloo. He emphasizes and demonstrates the great possibilities of statistical methods based on practical significance and calculated via the probability of agreement: https://www.jmp.com/en_ca/events/statistically-speaking/on-demand/the-same-similar-or-different.html

(At 28:20, the example is about RSM design validation with confirmatory runs).

 

I think this would be a better and fairer assessment framework for validating experiments. In the example from Sam, because of the low sample size of confirmation runs, the confidence interval is large, so it is not very informative about the validity of the model/predictions or about the comparison between the two groups (it would also perhaps need a non-parametric test with no assumption of variance equality). But even if we find a proper statistical test for the comparison, it will "only" check whether there may be a statistically significant difference, not a practically significant one.

In the "probability of agreement" framework, the practical difference threshold for non-similarity is stated before doing the test (similarly to how we set up a statistical test by defining the null and alternative hypotheses beforehand), so I find this comparison of much greater use and more relevant for domain experts.
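As a rough illustration of the idea only (a simplified stand-in for the methodology Dr. Stevens presents, with hypothetical numbers), a probability-of-agreement style summary could look like this in Python:

```python
# Simplified "probability of agreement" style summary: the probability that the
# difference between confirmation runs and model predictions falls within a
# practical threshold, using a normal approximation. Numbers are hypothetical.
import numpy as np
from scipy import stats

predicted = np.array([52.1, 48.7, 50.3, 49.9, 51.2])
observed  = np.array([51.4, 49.5, 50.9, 48.8, 51.8])
delta = 2.0                                  # practical agreement threshold, fixed before the analysis

diff = observed - predicted
mean_d = diff.mean()
se_d = diff.std(ddof=1) / np.sqrt(len(diff))

# Probability mass of the approximate sampling distribution of the mean difference
# that lies inside the agreement band [-delta, +delta].
p_agree = stats.norm.cdf(delta, loc=mean_d, scale=se_d) - stats.norm.cdf(-delta, loc=mean_d, scale=se_d)
print(f"estimated probability of agreement: {p_agree:.3f}")
```

The appeal is that the output is a single probability judged against a practically meaningful threshold, rather than a p-value about a purely statistical difference.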

Please include the probability-of-agreement methodology in the next version of JMP (it seems R packages implementing this framework are already available).

Publications from Dr. Nathaniel Stevens:

Publications | Nathaniel Stevens | University of Waterloo (uwaterloo.ca)