evtran
Level I

Losing Power and Prediction Variance in Custom DOE constraints

So I have to disallow certain factor combinations based on equipment capability, BUT doing so significantly reduces the power and inflates the prediction variance. Is there a way to get around this? And if I can't, how will I be able to trust the results if the power is low and the prediction variance is high?

[Three attached screenshots, including the prediction variance profiler]
2 ACCEPTED SOLUTIONS
Victor_G
Super User

Re: Losing Power and Prediction Variance in Custom DOE constraints

Hello @evtran,

 

Reading your question and looking at the prediction variance profiler, we can indeed see that your design is not totally balanced: the relative prediction variance at the lower levels of your factors (rocking temp, rocking speed) appears smaller than at their upper levels, so the effect of the constraints is visible. It is difficult to help without seeing the design itself, but here are some general remarks and advice on your questions about power and prediction variance:

 

  • Power is meaningful when you know the relative size of the signal you want to detect and you have good estimates of the experimental and measurement noise (RMSE) (see the answer from @Phil_Kay here: Solved: Should I consider power analysis in DOE? - JMP User Community).
  • Depending on your objective (and design), you may be less interested in the significance of terms than in predictive precision. In that case, look at and compare the relative prediction variance (and the Fraction of Design Space plot) of different designs, and choose the one that best minimizes it.
  • Looking at power and/or relative prediction variance is not very informative in isolation; compare them across designs. You can generate several candidate designs with different models and optimality criteria (a small numerical sketch of such a comparison follows this list).
    • Optimality criterion: if you want to highlight significant effects, a D-optimal (or A-optimal) design may be appropriate. If you only care about response prediction, an I-optimal design may suit your needs better.
    • Several models: when you specify your model, you can vary the terms. For example, you could screen only main effects if you are at the beginning of your study (and later augment the design to look at interactions and perhaps quadratic effects). Or you could enter interaction (and quadratic) terms in the model and set their estimability to "If Possible", so the design focuses on the main effects but can still estimate the other terms if enough degrees of freedom are available in the analysis.
    • Sample size: I don't know how many runs are planned in your case, but if the experimental cost is not too high, increasing the number of runs can greatly improve your confidence in the screening (it increases power and decreases the relative prediction variance).
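To make that comparison concrete, here is a minimal numerical sketch (in Python with numpy/scipy, not JMP/JSL) of how power and relative prediction variance can be compared between an unconstrained design and one with a disallowed corner. The two coded factors A and B, the excluded (+1, +1) combination, the run counts, the signal size of 1 × RMSE, and alpha = 0.05 are all hypothetical placeholders; substitute your own design tables and assumptions.

```python
import numpy as np
from scipy import stats

def model_matrix(runs):
    """Intercept, main effects A and B (coded -1..+1), and the A*B interaction."""
    runs = np.asarray(runs, dtype=float)
    a, b = runs[:, 0], runs[:, 1]
    return np.column_stack([np.ones(len(runs)), a, b, a * b])

def power_per_term(X, signal_to_noise=1.0, alpha=0.05):
    """Power of a two-sided t-test for each coefficient, assuming the true
    coefficient equals signal_to_noise * RMSE."""
    n, p = X.shape
    df = n - p
    se = np.sqrt(np.diag(np.linalg.inv(X.T @ X)))   # coefficient std. errors in RMSE units
    delta = signal_to_noise / se                     # noncentrality parameters
    tcrit = stats.t.ppf(1 - alpha / 2, df)
    return 1 - stats.nct.cdf(tcrit, df, delta) + stats.nct.cdf(-tcrit, df, delta)

def relative_prediction_variance(X, points):
    """x' (X'X)^-1 x at each prediction point (prediction variance / sigma^2)."""
    F = model_matrix(points)
    xtx_inv = np.linalg.inv(X.T @ X)
    return np.einsum("ij,jk,ik->i", F, xtx_inv, F)

# Candidate 1: replicated 2^2 factorial plus 2 center points (10 runs, no constraint).
unconstrained = [(-1, -1), (1, -1), (-1, 1), (1, 1)] * 2 + [(0, 0)] * 2
# Candidate 2: the same design with the disallowed (+1, +1) corner removed (8 runs),
# mimicking an equipment constraint.
constrained = [r for r in unconstrained if r != (1, 1)]

# Prediction points restricted to the region the equipment can actually reach.
g = np.linspace(-1, 1, 21)
allowed_region = np.array([(a, b) for a in g for b in g if not (a > 0.5 and b > 0.5)])

for name, runs in [("unconstrained", unconstrained), ("constrained", constrained)]:
    X = model_matrix(runs)
    pv = relative_prediction_variance(X, allowed_region)
    print(f"{name:13s} power (Int, A, B, AB) = {np.round(power_per_term(X), 2)}"
          f" | rel. pred. var. mean/max = {pv.mean():.2f}/{pv.max():.2f}")
```

Running the sketch with the constrained candidate shows exactly the pattern you describe: fewer runs and a missing corner mean larger coefficient standard errors (lower power) and a higher relative prediction variance over the reachable region.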

 

To be able to "trust" the results, compare the detected significant effects with your domain expertise: do they make sense? You can also set aside some extra runs (validation runs) and compare their predictions with the actual measurements, to check whether the precision of the predictions is acceptable for your needs.
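A tiny sketch of that validation check follows; the measured and predicted values and the acceptance limit are made-up placeholders for illustration.

```python
import numpy as np

# Hypothetical validation runs: measured responses vs. the model's predictions
# at the same factor settings. Replace with your own values and acceptance limit.
measured  = np.array([52.1, 47.8, 60.3, 55.0])
predicted = np.array([53.0, 46.5, 58.9, 56.2])
acceptable_error = 2.0   # largest prediction error you can tolerate, in response units

errors = measured - predicted
rmse = np.sqrt(np.mean(errors ** 2))
worst = np.max(np.abs(errors))
print(f"validation RMSE = {rmse:.2f}, worst-case error = {worst:.2f}")
print("prediction precision acceptable" if worst <= acceptable_error
      else "predictions outside acceptable precision -- consider augmenting the design")
```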

 

Sometimes you have no choice but to try a design: given the experimental budget, your objectives, and the design constraints, no other design may be better than the one you have.

The only way to know whether the design was "good enough" for your needs and specific situation is to run the experiments and analyze the results. If difficulties remain, you still have the possibility to augment your design.

 

I hope this answer helps,

Victor GUILLER
Scientific Expertise Engineer
L'Oréal - Data & Analytics


pauldeen
Level VI

Re: Losing Power and Prediction Variance in Custom DOE constraints

Limiting the factor exploration space will always come at the expense of power. The way to get around it is to add more runs to the DOE.
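A minimal illustration of this point (Python/numpy-scipy, not JMP; the two-factor example, the excluded (+1, +1) corner, the extra-run settings, a signal of 1 × RMSE, and alpha = 0.05 are hypothetical placeholders): the power for a main effect climbs back up as allowed runs are appended to a constrained design.

```python
import numpy as np
from scipy import stats

def power_for_A(runs, signal_to_noise=1.0, alpha=0.05):
    """Power to detect a main effect A of size signal_to_noise * RMSE in a
    two-factor model with interaction (intercept, A, B, A*B)."""
    r = np.asarray(runs, dtype=float)
    X = np.column_stack([np.ones(len(r)), r[:, 0], r[:, 1], r[:, 0] * r[:, 1]])
    df = len(r) - X.shape[1]
    se_A = np.sqrt(np.diag(np.linalg.inv(X.T @ X)))[1]   # std. error of the A coefficient
    tcrit = stats.t.ppf(1 - alpha / 2, df)
    delta = signal_to_noise / se_A
    return 1 - stats.nct.cdf(tcrit, df, delta) + stats.nct.cdf(-tcrit, df, delta)

# Constrained 8-run starting design: the (+1, +1) corner is not allowed.
base = [(-1, -1), (1, -1), (-1, 1)] * 2 + [(0, 0)] * 2
# Extra runs placed only at settings the equipment can reach.
extra = [(1, 0), (0, 1), (-1, -1), (1, -1)]

for k in range(len(extra) + 1):
    print(f"{len(base) + k:2d} runs -> power for A ~ {power_for_A(base + extra[:k]):.2f}")
```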


3 REPLIES

evtran
Level I

Re: Losing Power and Prediction Variance in Custom DOE constraints

Thank you both! The design I chose is just a custom DOE with all interactions, except that I removed the 4 runs that were not possible and added 2 center points. Where I'm getting stuck is which parameter settings to use for the extra runs to decrease the prediction variance, since I can't test at those parameter extremes.
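One rough way to reason about where extra runs help most, in the spirit of the "augment your design / add more runs" advice above, is a numpy sketch (not the JMP Augment Design platform itself): among the settings the equipment can actually reach, repeatedly add the run that most reduces the average relative prediction variance over the allowed region. The two coded factors, the excluded high/high corner, and the candidate grid below are hypothetical placeholders.

```python
import numpy as np

def model_matrix(points):
    """Intercept, A, B and A*B for coded two-factor settings."""
    p = np.atleast_2d(np.asarray(points, dtype=float))
    return np.column_stack([np.ones(len(p)), p[:, 0], p[:, 1], p[:, 0] * p[:, 1]])

def avg_rel_pred_var(design, region):
    """Average of x'(X'X)^-1 x over the prediction region."""
    X = model_matrix(design)
    xtx_inv = np.linalg.inv(X.T @ X)
    F = model_matrix(region)
    return np.einsum("ij,jk,ik->i", F, xtx_inv, F).mean()

# Current constrained design: the impossible runs removed, 2 center points added.
design = [(-1, -1), (1, -1), (-1, 1)] * 2 + [(0, 0)] * 2

# Candidate settings the equipment CAN reach (the high-A/high-B corner excluded).
g = np.linspace(-1, 1, 11)
allowed = [(a, b) for a in g for b in g if not (a > 0.5 and b > 0.5)]

# Greedy augmentation: at each step add the allowed setting that most reduces the
# average relative prediction variance over the reachable region.
for _ in range(4):
    scores = [avg_rel_pred_var(design + [c], allowed) for c in allowed]
    best = allowed[int(np.argmin(scores))]
    design.append(best)
    print(f"add run at A={best[0]:+.1f}, B={best[1]:+.1f} -> "
          f"avg rel. pred. var. {min(scores):.3f}")
```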