gchesterton
Level IV

DOE: Should I bother with lots of replicates with only two factors?

Suppose I am conducting an experiment with only two 2-level categorical factors of interest. However, I have the budget for 24 runs. That's 6 replicates of each design point of a full factorial. I really only need to detect the factors' (and interaction's) effects, but I anticipate large experimental error due to the nature of the human-performance measures under investigation. Do the additional replicates significantly help my power to detect factor effects? Or would I be better off saving my money?
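
For a rough sense of the trade-off being asked about, here is a minimal sketch (a plain noncentral-t power calculation done outside JMP, with an assumed signal-to-noise ratio of 0.5, i.e. a coefficient of half a standard deviation) of how power changes with the number of replicates of the 2x2 factorial:

```python
# Power vs. number of replicates for a 2x2 full factorial (coded +/-1 factors;
# model terms: intercept, A, B, A*B). The SNR of 0.5 is an assumed placeholder.
import numpy as np
from scipy import stats

snr, alpha, p = 0.5, 0.05, 4          # |coefficient| / sigma, test level, model terms

for reps in range(2, 7):              # 1 replicate leaves no error df for this model
    N = 4 * reps                      # total runs
    df = N - p                        # error degrees of freedom
    ncp = snr * np.sqrt(N)            # noncentrality: coefficient / (sigma / sqrt(N))
    tcrit = stats.t.ppf(1 - alpha / 2, df)
    power = 1 - stats.nct.cdf(tcrit, df, ncp) + stats.nct.cdf(-tcrit, df, ncp)
    print(f"{reps} replicates ({N} runs): power ~ {power:.2f}")
```

With a weak signal like this, each additional replicate still buys a noticeable amount of power; with a strong signal the curve flattens quickly.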

6 REPLIES
gchesterton
Level IV

Re: DOE: Should I bother with lots of replicates with only two factors?

More to the point...what is the best way to work with JMP to answer that question? I assume it's related to a power analysis, but I don't have a good sense of how much noise to expect, which I'd need in order to set the signal-to-noise ratio for a power analysis. I thought maybe someone would have some intuition for this sort of question, but I'm happy to do a more formal power analysis if that's really the only appropriate way to answer the original question.
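
One way around not having a good noise estimate is to treat the signal-to-noise ratio as the unknown and sweep it, as in this rough sketch (same noncentral-t approximation as above, fixed at the 24-run design):

```python
# Sensitivity of power to the assumed signal-to-noise ratio (|coefficient| / sigma)
# for the 24-run design: 6 replicates of a 2x2 factorial, 4 model terms.
import numpy as np
from scipy import stats

N, p, alpha = 24, 4, 0.05
df = N - p
tcrit = stats.t.ppf(1 - alpha / 2, df)

for snr in (0.25, 0.5, 0.75, 1.0):
    ncp = snr * np.sqrt(N)
    power = 1 - stats.nct.cdf(tcrit, df, ncp) + stats.nct.cdf(-tcrit, df, ncp)
    print(f"SNR {snr:.2f}: power ~ {power:.2f}")
```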

Re: DOE: Should I bother with lots of replicates with only two factors?

Your goal at this point seems to be estimation and testing of parameters rather than prediction of the response. Therefore, your focus in evaluating the design should be on estimation and power. The Design Evaluation outline in the design platform contains information for assessing the performance of the design. The same information is available in the Compare Designs platform when you are considering two or more candidate designs. (See the sketch after this reply for a small illustration of the estimation side.)

This page is a good place to start learning about what evaluations are available and how to use them.
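
For intuition about the estimation side of those reports, here is a small sketch (plain linear-model algebra, not the JMP platform itself) comparing the relative standard error of the coefficients for 3 versus 6 replicates of the 2x2 full factorial:

```python
# Relative standard error of each coefficient, sqrt(diag((X'X)^-1)), for two
# candidate designs: 3 vs. 6 replicates of a +/-1 coded 2x2 full factorial.
import numpy as np

base = np.array([[1, -1, -1,  1],
                 [1, -1,  1, -1],
                 [1,  1, -1, -1],
                 [1,  1,  1,  1]])          # columns: intercept, A, B, A*B

for reps in (3, 6):
    X = np.vstack([base] * reps)
    rse = np.sqrt(np.diag(np.linalg.inv(X.T @ X)))
    print(f"{reps} reps ({X.shape[0]} runs): relative std error ~ {rse[1]:.3f} per term")
```

Because the +/-1 coded design is orthogonal, the relative standard error is simply 1/sqrt(N), so doubling the runs shrinks it by a factor of sqrt(2).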

Re: DOE: Should I bother with lots of replicates with only two factors?

The power analysis requires an estimate of the absolute standard deviation of the response (Anticipated RMSE) and the magnitude of each effect on the response (Anticipated Coefficient = effect / 2). Alternatively, you can leave the Anticipated RMSE at its default value of 1 and enter the Anticipated Coefficient as half the effect size expressed in standard deviations. For example, if you anticipate the effect of a term in the model to be at least 3 SD, then enter 1.5 for the anticipated coefficient.

Finally, if it is easier for you to define the effects in terms of the response values, you can instead enter the anticipated response for each run.
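
As a rough illustration (outside JMP, using a noncentral-t approximation rather than the platform's exact computation) of how those two inputs combine for the 24-run design and the 3 SD example above:

```python
# Power for one model term from Anticipated RMSE and Anticipated Coefficient,
# approximated with a noncentral-t calculation for the 24-run, 4-term model.
import numpy as np
from scipy import stats

rmse, coef = 1.0, 1.5                 # Anticipated RMSE (default); coefficient = effect / 2
N, p, alpha = 24, 4, 0.05

df = N - p
ncp = coef / (rmse / np.sqrt(N))      # coefficient divided by its standard error
tcrit = stats.t.ppf(1 - alpha / 2, df)
power = 1 - stats.nct.cdf(tcrit, df, ncp) + stats.nct.cdf(-tcrit, df, ncp)
print(f"Power to detect a 3 SD effect in 24 runs ~ {power:.3f}")
```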

P_Bartell
Level VIII

Re: DOE: Should I bother with lots of replicates with only two factors?

Here's my take...others may feel differently...but I'm coming at my reply from the assertion in your original post that it's the measurement system, not variation in the factors as a result of experimental execution, that is contributing most of the noise. So rather than run replicates, can you repeatedly measure each treatment combination as a means to both quantify this noise source AND perhaps tease out factor effects? If the measurement system is destructive, or you think repeated measures might somehow build in bias, then don't go down this road. Otherwise, save yourself the experimental resources, especially if repeated measures are cheaper/easier/more efficient than true replicates. From there you have a few ways to handle the analysis...a repeated measures style analysis is one. Or some sort of transformation of each treatment combination's repeated measures...the median, or go Olympic judging...throw out the high and low and then take the average, or some such idea (a rough sketch of that summarizing idea appears below).

Just a thought...
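
A minimal sketch of the "summarize each treatment combination's repeated measures first" idea from the reply above, using hypothetical column names and placeholder data, with both the median and an Olympic-judging style trimmed mean:

```python
# Collapse each treatment combination's repeated measures to a single summary
# value before modeling. Column names and values are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "A": ["lo"] * 4 + ["hi"] * 4,
    "B": ["lo"] * 4 + ["hi"] * 4,
    "y": [10.2, 14.8, 11.1, 11.9, 13.5, 12.9, 19.0, 14.1],  # repeated measures
})

summary = (
    df.groupby(["A", "B"])["y"]
      .agg(median="median",
           olympic=lambda s: stats.trim_mean(s, 0.25))  # drop high/low, average the rest
      .reset_index()
)
print(summary)
```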

gchesterton
Level IV

Re: DOE: Should I bother with lots of replicates with only two factors?

Thanks Peter,
I can see why you offered that suggestion. However, the problem isn't so much that our measurement "tool" is imprecise. If it were, I could see the benefit of repeated measures. It's more that the activity we're measuring varies quite a bit from run to run, even if all else were held constant, because our experiment is not as tightly controlled as I would like. If we could force a much narrower set of actions to measure, I could reduce the noise, of course, but that would result in a less realistic scenario. So the more realistic scenario comes at the cost of reduced power.
statman
Super User

Re: DOE: Should I bother with lots of replicates with only two factors?

I'll add my thoughts beyond what has already been discussed. Realize that advice needs to be tailored to the situation, and I do NOT understand your specific situation.

In experimentation, there are two sets of factors you are interested in: the design factors (the factors you are manipulating and hoping are causally related to the response variables) and the noise. Both sets of factors can affect the response variables. There does seem to be a bias toward "optimizing" the design structure (e.g., optimal designs); there should be an increased focus on understanding the noise. There are many strategies for handling noise in an experimental situation (e.g., repeats, nesting, RCBD, BIB, split-plots, etc.).

I think it is worthwhile to spend some of your resources to gain an understanding of the noise while not negatively impacting the precision of the design. That does not mean I would run a bunch of replicates (blocks). In fact, I'm not sure I would want to run more than 2 blocks, as I'm certainly not interested in a non-linear polynomial term for Block in a prediction equation. You can exaggerate those block effects (like bold level settings for factors), get some idea of the effect of noise, and possibly estimate block-by-factor interactions (a sketch of a blocked analysis follows below).

Ultimately, you want your experiment to be as representative of real conditions as possible. Unfortunately, this means it will likely be noisy, so partitioning the noise will help expand the inference space without negatively affecting design precision. So I would save your money on the initial experiment, because you'll want to spend it on additional iterations.

"All models are wrong, some are useful" G.E.P. Box