I think the key to getting two of the As in the trials is to include the 3 pairwise interaction terms among the As in the model.
Here's my resulting design with the 10-term (vs. 7-term) model, showing lots of pairwise combinations and no three-way combinations of the As.
I hope this delivers what you want. Thanks for the good discussion.
Thank you again for your reply.
I did include the interactions among the A components and I get something similar to your design. I attach the views below.
Would you say that this is a good "design," or do you agree that we should make more of an effort to enlarge the factor ranges so as to have a more robust experimental program?
Happy to see you can produce a similar design satisfying the desire to have binary blends of the As.
As for whether it is a good design: you never stated, and I never asked, what your goal(s) are. If the goal is to optimize the process, then why not include all interactions to build a potentially more predictive model? If the goal is screening (to find better-performing levels, or to learn whether a component makes any difference over the proposed ranges), then I would say yes, it is good. As you have already discussed with Mark, expanding the ranges (being bold) increases the size of the effect relative to the noise in the process and therefore increases power. Good luck experimenting.
I often find that experimenters limit factor ranges for no good reason. The choice is guided by what they think they know, not by the requirements of the regression analysis ahead. "I think that the best level is 12%, so I will test 10% to 15%." Not a good idea. That idea comes from a testing mentality, not an experimenting mentality. The range should always be as wide as is realistically possible in order to produce the largest effects. (I am fully aware that there are often physical limitations on the ranges. That limitation is not what I am talking about.) That way you maximize power (without necessarily increasing the number of runs), minimize the relative standard error of the parameter estimates and predicted response, and narrow the confidence intervals. Most informative.
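To see the standard-error point concretely, here is a minimal sketch (the noise level, run count, and ranges are my own assumed numbers, not from this thread). For a two-level design, the standard error of the fitted slope is sigma / (sd(x) * sqrt(n)), so widening the factor range shrinks it directly:

```python
import numpy as np

SIGMA = 2.0  # assumed process noise, in response units
N = 8        # assumed number of runs

def slope_se(low, high, n=N, sigma=SIGMA):
    """Standard error of the regression slope for a two-level design
    with runs split evenly between the extreme factor levels."""
    x = np.array([low] * (n // 2) + [high] * (n // 2), dtype=float)
    return sigma / (x.std() * np.sqrt(n))

narrow = slope_se(10, 15)  # "test around the guessed optimum" (width 5)
wide = slope_se(5, 25)     # bolder range (width 20)
print(narrow, wide)        # the 4x wider range gives a 4x smaller slope SE
```

Nothing about the process changed between the two calls; only the spread of the factor settings did, and the slope's standard error scales down in exact proportion.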
Why narrow factor ranges? Why not widen them? This lesson is stubbornly ignored. It is a mentality thing.
(Note that this is my personal opinion.)
Power is the probability that you will decide a real effect is significant; equivalently, it is one minus the probability of a Type II error. That is, it applies when the alternative hypothesis is true (there is a real effect). For example, in a regression analysis such as the one used to fit the linear model to the experimental data, the null hypothesis is that a parameter is 0 (no effect). The alternative hypothesis is that the parameter is not 0. We could use a t-test to decide. The t-ratio is (estimate - hypothesized value) / (standard error of the estimate). By widening the factor range, you produce a larger effect (and a larger estimate). That change, in turn, produces a larger numerator in the t-ratio. A larger t-ratio has a smaller p-value, so you are more likely to decide that the real effect is significant.
On the other hand, if you arbitrarily narrow the factor range, the change is in the opposite direction. You will produce a smaller effect that leads to a smaller numerator and, therefore, a smaller t-ratio with a higher p-value. Now it is more likely that the decision will be that the effect is not significant.
Caution: even if you can get them to widen the range now, once they know where the factor should be set, the mentality will return and they will want to narrow the range when this factor is included in a future experiment. Never narrow the factor range.