So I have to disallow certain combinations based on equipment capability, BUT it decreases the prediction variance and power significantly. Is there a way to get around this? And if I can't, how will I be able to trust the results if power is low and the prediction variance is high?
Hello @evtran,
Reading your question and looking at the prediction variance profiler, we can indeed see that your design is not totally balanced: the predicted relative variance at the lower levels of your factors (rocking temp, rocking speed) seems smaller than at the upper levels, which reflects your constraints. It is difficult to be more specific without seeing the design itself, but here are some general remarks and advice on your questions about power and prediction variance:
To be able to "trust" the results, compare the detected significant effects with your domain expertise: do they make sense? You can also prepare some extra runs (validation runs) and use them to compare predictions with actual measurements, to check whether the precision of your predictions is acceptable for your needs.
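If it helps to see that validation check outside of JMP, here is a minimal sketch in Python (numpy/statsmodels); the factor names, data values, and model terms are placeholders rather than your actual design:

```python
# Minimal sketch of a validation-run check: fit the DOE model, then compare
# predictions against a few held-back validation runs. All values below are
# hypothetical placeholders, not the poster's real data.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical DOE results in coded -1/0/+1 units
doe = pd.DataFrame({
    "rocking_temp":  [-1, -1,  1,  1, 0, 0],
    "rocking_speed": [-1,  1, -1,  1, 0, 0],
    "response":      [12.1, 14.8, 13.0, 18.2, 14.5, 14.9],
})

# Main effects + interaction model (adjust to the model you actually fit)
model = smf.ols("response ~ rocking_temp * rocking_speed", data=doe).fit()

# Validation runs performed after the DOE, with measured responses
validation = pd.DataFrame({
    "rocking_temp":  [0.5, -0.5],
    "rocking_speed": [0.5,  0.5],
    "measured":      [15.6, 14.3],
})

pred = model.get_prediction(validation).summary_frame(alpha=0.05)
validation["predicted"] = pred["mean"].to_numpy()
validation["pi_low"]    = pred["obs_ci_lower"].to_numpy()
validation["pi_high"]   = pred["obs_ci_upper"].to_numpy()
validation["abs_error"] = (validation["measured"] - validation["predicted"]).abs()

print(validation)
# If the measured values fall inside the prediction intervals and the absolute
# errors are small relative to your requirements, the precision is adequate.
```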
Sometimes you have no choice but to try a design: given your experimental budget, objectives, and design constraints, no better design may be achievable than the one you have.
The only way to know whether the design was "enough" for your needs and specific situation is to run the experiments and analyze the results. If difficulties appear, you will still have the possibility to augment your design.
Hope this answer helps,
Limiting the factor exploration space will always come at the expense of power. The way to get around it is to add more runs to the DOE.
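To put a rough number on that trade-off, a Monte Carlo sketch like the one below can show how power for a main effect grows as runs are added. The effect size, noise SD, and significance level are assumptions, and the random +/-1 settings only stand in for a real constrained design; JMP's design evaluation reports the exact power for your specific design.

```python
# Rough Monte Carlo sketch of how power to detect a main effect grows as runs
# are added. The anticipated coefficient, noise SD, and alpha are assumptions;
# replace them with values from your own design evaluation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, sigma, effect = 0.05, 1.0, 1.0   # significance level, noise SD, anticipated coefficient
n_sim = 2000

def simulated_power(n_runs):
    """Estimate power for the factor-A coefficient in a 2-factor + interaction model."""
    detections = 0
    for _ in range(n_sim):
        # Random +/-1 settings stand in for the real (constrained) design matrix
        A = rng.choice([-1.0, 1.0], n_runs)
        B = rng.choice([-1.0, 1.0], n_runs)
        X = np.column_stack([np.ones(n_runs), A, B, A * B])
        y = effect * A + rng.normal(0, sigma, n_runs)
        XtX_inv = np.linalg.pinv(X.T @ X)
        beta = XtX_inv @ X.T @ y
        resid = y - X @ beta
        dof = n_runs - np.linalg.matrix_rank(X)
        se = np.sqrt(resid @ resid / dof * XtX_inv[1, 1])
        t = abs(beta[1]) / se
        detections += t > stats.t.ppf(1 - alpha / 2, dof)
    return detections / n_sim

for n in (8, 12, 16, 20):
    print(f"{n} runs -> power ~ {simulated_power(n):.2f}")
```

Re-running it with your own anticipated coefficients and noise estimate gives a feel for how many extra runs would bring power back to an acceptable level.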
Thank you both! The design I chose is just a custom DOE with all interactions, except that I removed the 4 runs that were not possible and added 2 center points. Where I'm getting stuck is which parameter settings to use for the extra runs to decrease prediction variance, if I can't test at those parameter extremes.
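One way to think about where to place those extra runs: under the current model and design, compute the relative prediction variance x'(X'X)^-1 x over a grid of candidate settings your equipment can actually reach, and add runs where that variance is largest. The Python sketch below only illustrates the idea; the design points, constraint, and model terms are placeholders, and JMP's Augment Design platform performs this kind of optimization properly on your real design.

```python
# Sketch of a design-augmentation heuristic: for the current design matrix,
# evaluate the relative prediction variance over feasible candidate settings
# and flag the points where an extra run would help most. Placeholder data only.
import itertools
import numpy as np

def model_row(a, b):
    # Intercept, main effects, and interaction (match your actual model)
    return np.array([1.0, a, b, a * b])

# Placeholder current design in coded units (rocking_temp, rocking_speed),
# with the infeasible high/high corner removed and two center points
current = [(-1, -1), (-1, 1), (1, -1), (1, 0), (0, 1), (0, 0), (0, 0)]
X = np.array([model_row(a, b) for a, b in current])
XtX_inv = np.linalg.inv(X.T @ X)

def feasible(a, b):
    # Placeholder equipment constraint: cannot run both factors near their max
    return not (a > 0.5 and b > 0.5)

levels = np.linspace(-1, 1, 9)
candidates = [(a, b) for a, b in itertools.product(levels, levels) if feasible(a, b)]

# Relative prediction variance at each feasible candidate point
rel_var = [(a, b, model_row(a, b) @ XtX_inv @ model_row(a, b)) for a, b in candidates]
rel_var.sort(key=lambda t: t[2], reverse=True)

print("Feasible settings where an extra run reduces prediction variance most:")
for a, b, v in rel_var[:5]:
    print(f"  rocking_temp={a:+.2f}, rocking_speed={b:+.2f}, relative variance={v:.2f}")
```

The best candidates typically sit on the feasible boundary closest to the excluded region, which is where the constraint inflates prediction variance the most.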