lazzybug
Level III

How does DoE model evaluation impact the final model after the experiment?

Hi Folks,

 

I have several questions about DoE; could you please answer them? Thank you so much.

 

1) Due to an experimental constraint, can I manually adjust the center point value? For example, with (-1) = 10 and (+1) = 12, the center point is (0) = 11. But 11 is not a workable center point for us; can I change it to 10.5 in the design? How would this change affect the data analysis, the final model, and model predictions? And what if I used (0) = 11.5 in the next experiment while keeping (-1) = 10 and (+1) = 12? (See the coding sketch right after these questions.)

2) How do the quantities in the design evaluation, such as power, confounding, and predicted variance, affect the final model once I have collected all the experimental data? Say the power for one interaction is only 0.2 in one design versus 0.8 in another design: what would the potential difference in outcome be?
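
For reference, a minimal Python sketch of the level coding in question 1 (the helper names are mine, purely for illustration): a two-level factor maps actual settings to coded units via coded = (actual - mid) / half_range, so moving the middle run away from 11 moves it away from coded 0.

```python
# Coding for the example in question 1: low = 10, high = 12.
low, high = 10.0, 12.0
mid = (low + high) / 2          # 11.0, the true center
half_range = (high - low) / 2   # 1.0

def to_coded(actual):
    """Convert an actual setting to coded (-1 .. +1) units."""
    return (actual - mid) / half_range

def to_actual(coded):
    """Convert a coded value back to an actual setting."""
    return mid + coded * half_range

# Running 10.5 instead of 11 puts the "center" run at coded -0.5,
# and 11.5 puts it at coded +0.5:
for setting in (10.0, 10.5, 11.0, 11.5, 12.0):
    print(f"actual {setting:4.1f} -> coded {to_coded(setting):+.2f}")
```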

 

Thank you so much again.

ACCEPTED SOLUTION
Byron_JMP
Staff

Re: How does DoE model evaluation impact the final model after the experiment?

Hello,

If you need to mess with the setting of the "0" level, it's OK. Sometimes the midpoint between -1 and +1 just doesn't work well practically.

Changing the location of the middle point might affect the prediction variance in the design diagnostics, but the analysis, not so much.

 

When you click the "Model" script in the DOE design table to run the model, go to the red triangle menu at the top left of the model launch dialog and turn off "Centered Polynomials".
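
For intuition, here is an illustration of what that option does (my own Python sketch, not JMP output; the centering convention used here, the sample mean, is an assumption and may differ from JMP's in detail). Centering a factor before forming its quadratic term changes the reported coefficients but not the fitted values, which is why the option matters for interpretation rather than for fit:

```python
import numpy as np

# Fit y = b0 + b1*x + b2*x^2 with and without centering x before
# squaring. Both parameterizations span the same column space, so
# predictions agree; only the printed coefficients differ.
rng = np.random.default_rng(1)
x = np.array([10.0, 10.0, 10.5, 12.0, 12.0])  # off-center middle run
y = 5 + 0.8 * x + 0.3 * (x - 11) ** 2 + rng.normal(0, 0.1, x.size)

X_raw = np.column_stack([np.ones_like(x), x, x**2])
X_cen = np.column_stack([np.ones_like(x), x, (x - x.mean()) ** 2])

b_raw, *_ = np.linalg.lstsq(X_raw, y, rcond=None)
b_cen, *_ = np.linalg.lstsq(X_cen, y, rcond=None)

print("uncentered coefficients:", np.round(b_raw, 3))
print("centered coefficients:  ", np.round(b_cen, 3))
print("max |prediction diff|:  ", np.abs(X_raw @ b_raw - X_cen @ b_cen).max())
```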

 

Power analysis depends on assumptions that most people just don't have enough information about to make very accurate estimates; if you're using a default design, it's going to be a pretty good design. To get a better idea of your power, you have to do the hard thought experiment. The bottom half of the Design Evaluation menu includes a section where you have to guess what the results of your experiment will be. When you update, the anticipated coefficients and RMSE are updated based on those guesses, and the power is then based on how much error might be there and what your coefficients might be.
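
As a rough analogue of that thought experiment (my own Monte Carlo sketch, not JMP's internal calculation), here is how anticipated coefficients and an anticipated RMSE translate into power for one term; the design and all numbers are invented guesses:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Replicated 2^2 factorial in coded units (8 runs, so 4 error df)
A = np.tile([-1.0, -1.0, 1.0, 1.0], 2)
B = np.tile([-1.0, 1.0, -1.0, 1.0], 2)
X = np.column_stack([np.ones(8), A, B, A * B])

beta = np.array([10.0, 1.0, 0.5, 0.25])  # anticipated coefficients (guesses)
rmse = 1.0                                # anticipated RMSE (also a guess)
alpha, n_sim, term = 0.05, 5000, 3        # term 3 is the A*B interaction

XtX_inv = np.linalg.inv(X.T @ X)
df = X.shape[0] - X.shape[1]
t_crit = stats.t.ppf(1 - alpha / 2, df)

hits = 0
for _ in range(n_sim):
    y = X @ beta + rng.normal(0, rmse, X.shape[0])   # simulate one experiment
    b = XtX_inv @ X.T @ y                            # least-squares fit
    resid = y - X @ b
    se = np.sqrt(resid @ resid / df * XtX_inv[term, term])
    hits += abs(b[term] / se) > t_crit               # significant this time?

print(f"simulated power for the interaction: {hits / n_sim:.2f}")
```

With a small anticipated coefficient relative to the RMSE, the simulated power comes out low; raising the coefficient or adding runs raises it, which is exactly the trade-off the evaluation section lets you explore.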

JMP Systems Engineer, Health and Life Sciences (Pharma)


4 REPLIES
P_Bartell
Level VIII

Re: How does DoE model evaluation impact the final model after the experiment?

Here's my take on your questions:

 

You didn't articulate the specific type of design (full/fractional factorial plus center points, D- or I-optimal, Taguchi (hope not), or some other design), so my answers might change a bit based on your specific design, but generally, here goes:

1) You can put the 'center point' any place you like. How will it impact the analysis? My guess, and it's just a guess, is 'not much'. How's that for specificity? You will lose a bit of design space balance, but that's not usually a deal breaker. The final model is dictated more by the specific design than anything else; generally speaking, where you put a center point doesn't have a huge impact on the final model structure and the estimable terms.

 

2) Power, confounding, and predicted variance are established before you collect data. The act of collecting the data, that is, executing the experiment, does not affect power, confounding, etc.

lazzybug
Level III

Re: How does DoE model evaluation impact the final model after the experiment?

1. This is a custom design. The reason we want to avoid the exact center is that this is a duration experiment, and we want to finish the work within working hours (8:00 am to 5:00 pm). Because this is a very complicated experiment, we cannot start it at the same time every day; that's why I wonder whether we can adjust the center point to fit our working hours.

2. I know that power analysis, confounding, and predicted variance are determined before data collection. I want to know how the design impacts the final model. Let's say the power for one variable's curvature is 0.2 in one design and 0.8 in another, and the final analysis finds this curvature significant. Does this mean the second design gives a narrow confidence interval because of its higher power, while the design with power = 0.2 gives a very broad confidence interval? I want to know how the design evaluation impacts the final data analysis. Of course, the final data won't change anything in the design evaluation. (See the sketch just below.)
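
To illustrate the intuition behind this question (my own sketch, not from JMP; the designs and numbers are invented): for a fixed anticipated RMSE, a design with higher power for a term also has a smaller standard error for that term, and therefore a narrower confidence interval.

```python
import numpy as np
from scipy import stats

def ci_half_width(x, sigma=1.0, alpha=0.05):
    """Half-width of the CI for the quadratic (curvature) coefficient
    in the one-factor model 1 + x + x^2, given the error sigma."""
    X = np.column_stack([np.ones_like(x), x, x**2])
    df = X.shape[0] - X.shape[1]
    se = sigma * np.sqrt(np.linalg.inv(X.T @ X)[2, 2])
    return stats.t.ppf(1 - alpha / 2, df) * se

low_power_design = np.array([-1.0, -1.0, 0.0, 1.0, 1.0])  # few runs
high_power_design = np.tile([-1.0, 0.0, 1.0], 4)          # more runs

print("CI half-width, low-power design: ", round(ci_half_width(low_power_design), 2))
print("CI half-width, high-power design:", round(ci_half_width(high_power_design), 2))
```

So yes, other things being equal, the higher-power design yields a narrower confidence interval for the curvature term; power and CI width are two views of the same standard error.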

statman
Super User

Re: How does DoE model evaluation impact the final model after the experiment?

Some of my thoughts on your questions:

1. I'm a bit confused: if you can do 10 and 12, why not do 11? You're not trying to "pick a winner" but to create a DF to estimate curvature over the design space. "Good" has nothing to do with it (and how do you know 11 is not "good"?). The less centered the center point, the more biased the estimate and the less rotatable the design (the sketch at the end of this reply puts a number on that). Of course, you can study any part of the space you'd like to; just realize you are making compromises and potentially biasing the results. Be careful interpreting the analysis. Why not do 10.5, 11.5, and 12.5?

2. I'm also unsure about your 2nd question, and you may not find my comments useful. You will need to make decisions about the experiment design in the planning stage. For example: which effects will be separated, which will be confounded or partially confounded, how much practically significant variation needs to be created, what the current process variation is and how stable it is, how noise will be handled (e.g., measurement error, raw material variation, ambient conditions), and, of course, the sample size (I'll leave the criteria for this to others). Many of the questions you need to answer in the planning stage are currently unknown and will likely have to be estimated. All you can do is due diligence. I highly recommend developing multiple DOE plans and comparing and contrasting them, evaluating the potential knowledge gain versus the resources required to execute. Predict every possible outcome and what you will do in each situation; then run the experiment, analyze it, and compare the results with your predictions.
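
To put a number on the rotatability point in item 1 (my own sketch, in coded units with 10 mapped to -1 and 12 to +1, so 11 is 0 and 11.5 is +0.5): sliding the middle run off center inflates the variance of the curvature estimate.

```python
import numpy as np

def curvature_var_multiplier(x):
    """Var(quadratic coefficient) / sigma^2 for the model 1 + x + x^2."""
    X = np.column_stack([np.ones_like(x), x, x**2])
    return np.linalg.inv(X.T @ X)[2, 2]

centered = np.array([-1.0, -1.0, 1.0, 1.0, 0.0, 0.0])  # middle runs at 11
shifted = np.array([-1.0, -1.0, 1.0, 1.0, 0.5, 0.5])   # middle runs at 11.5

print("variance multiplier, center at 11:  ", round(curvature_var_multiplier(centered), 2))
print("variance multiplier, center at 11.5:", round(curvature_var_multiplier(shifted), 2))
```

Here the multiplier roughly doubles (0.75 to about 1.44), so the curvature estimate from the off-center design is noticeably noisier even before any model-misspecification bias.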

"All models are wrong, some are useful" G.E.P. Box