I generated a model from 20 runs designed with a DoE.
I also ran, in parallel, 20 runs designed with an LHS on the same design space.
I would like to compare the observed responses obtained from the LHS runs to the responses predicted by the DoE model.
What kind of procedure would you suggest to validate/verify the initial DoE model?
I used equivalence testing, but it seems a stringent test (I mean, a model has to perform very well to pass it).
Please note that I do not have JMP Pro.
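For reference, the equivalence test mentioned above, in its common two-one-sided-tests (TOST) form on the paired differences, can be sketched as below. This is a minimal illustration, not JMP's implementation; the data and the equivalence margin are invented, and the margin must be chosen from subject-matter knowledge:

```python
import numpy as np
from scipy import stats

def tost_paired(predicted, observed, margin):
    """Two one-sided tests (TOST) on paired differences:
    equivalence is declared if the mean difference is shown
    to lie within +/- margin at the chosen alpha."""
    d = np.asarray(observed) - np.asarray(predicted)
    n = len(d)
    se = d.std(ddof=1) / np.sqrt(n)
    t_lower = (d.mean() + margin) / se   # H0: mean diff <= -margin
    t_upper = (d.mean() - margin) / se   # H0: mean diff >= +margin
    p_lower = 1.0 - stats.t.cdf(t_lower, n - 1)
    p_upper = stats.t.cdf(t_upper, n - 1)
    return max(p_lower, p_upper)         # equivalence if < alpha

# Hypothetical numbers: small prediction errors, margin of 1 response unit.
predicted = [12.0, 14.5, 16.0, 18.2, 19.5]
observed  = [12.1, 14.5, 15.9, 18.4, 19.6]
p_eq = tost_paired(predicted, observed, margin=1.0)
print(f"TOST p-value: {p_eq:.4f}")
```

The test's stringency depends directly on the margin: a tight margin demands a very accurate model, which may explain why the test feels hard to pass.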
It's not clear to me what an 'LHS' is... but I'd start simple with respect to comparing the results. I'll presume an 'LHS' is some sort of empirical investigation, run at treatment combinations across the original DOE's design space, for which you also have predictions from the original model? If that's the case, I'd start simple. Maybe just a histogram of the differences between the predicted values and the 'LHS' values. Then maybe a scatter plot (Graph Builder or Fit Y by X) of the predicted vs. LHS values, with the predicted values on one axis and the LHS values on the other.
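The quantities behind those two plots can be summarized numerically as in the sketch below. This assumes the predicted and LHS-observed responses sit in two parallel arrays; the values here are invented purely for illustration:

```python
import numpy as np

# Hypothetical data: model predictions and LHS-observed responses
# for the same 20 factor settings (numbers invented for illustration).
rng = np.random.default_rng(0)
predicted = rng.uniform(10, 20, size=20)
observed = predicted + rng.normal(0, 0.5, size=20)

diff = observed - predicted                  # what the histogram would show
rmse = np.sqrt(np.mean(diff ** 2))           # overall prediction error
r = np.corrcoef(predicted, observed)[0, 1]   # scatter-plot agreement

print(f"mean difference: {diff.mean():.3f}")
print(f"RMSE: {rmse:.3f}")
print(f"correlation: {r:.3f}")
```

A mean difference near zero suggests no systematic bias; the RMSE and the correlation give a sense of how tightly the scatter plot hugs the 45-degree line.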
You might use a matched-pairs t-test for a significant difference between the predicted and observed responses. That is, use the original model to predict the response for each run in the Latin hypercube design. Your data table will use two data columns for this analysis: enter them as predicted and observed in the Y role in the Matched Pairs launch.
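Outside JMP, the same matched-pairs test can be sketched with SciPy's paired t-test; the five response values below are invented, so substitute your own two columns:

```python
from scipy import stats

# Hypothetical predicted vs. observed responses for five LHS runs
# (numbers invented for illustration; use your own two columns).
predicted = [12.0, 14.5, 16.0, 18.2, 19.5]
observed  = [12.3, 14.2, 16.4, 18.0, 19.9]

# Paired t-test: H0 is that the mean paired difference is zero.
t_stat, p_value = stats.ttest_rel(observed, predicted)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

A large p-value here only fails to detect a difference; unlike the equivalence test, it cannot positively demonstrate agreement, which is why the two approaches behave so differently in stringency.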
Just curious: what is the difference, if any, between the models obtained from the two DOEs for the same factors?