
Discussions

Solve problems and share tips and tricks with other JMP users.
Coverbird30
Level II

Good fraction of space, but low power of DoE

Hello JMP community,

I am creating a DoE design with two continuous factors, one categorical factor with 6 levels, and one categorical factor with 2 levels. When I create a D-optimal design for this study, I see that the power for detecting the interactions I am interested in is quite low (anticipated coefficient = 5, power = 0.61). However, the fraction of design space plot looks quite good: 95% of the fraction of space lies below a prediction variance of 1.0.

Can you please explain why the fraction of design space plot looks good while the power of the model is quite low?

1 ACCEPTED SOLUTION

Accepted Solutions

Re: Good fraction of space, but low power of DoE

Hi, 

I think you are mixing two things up.

Both FDS plots and power analysis tell you something about how good your DOE is, but they measure completely different qualities:

  • FDS Plot → How precisely can your model make predictions across the design space? In other words, how reliable are the predictions over most of the region.
  • Power Analysis → How likely is your experiment to detect real effects, given the noise and sample size? In other words, can you reliably detect meaningful differences.

They are related because both help you decide whether your design is adequate, but they operate on different levels of the DOE.

  • Power = 0.65 means a 65% chance of detecting an effect of the chosen size.
  • If your FDS curve is good (e.g., 80% of the space is below the variance threshold), your design gives reliable predictions over most of the region.

Power analysis focuses on hypothesis testing, not prediction precision.

Here are some points that will hopefully make this clearer:

Similarities: both depend on the design's variance structure (FDS on the prediction variance, power on the variance of the effect estimates). Both are driven by the same core elements: the design matrix and the noise level.
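As a rough sketch of that shared core (a toy coded 2^2 design and a normal-approximation power formula are my own assumptions here, not JMP's calculation, which uses t-based power), both quantities fall out of the same matrix, (X'X)^-1:

```python
# Illustrative sketch only: FDS-style prediction variance and power for one
# effect, both computed from (X'X)^-1 of a toy 2^2 factorial design.
import math
import numpy as np

# Columns: intercept, factor A, factor B, interaction AB (coded -1/+1)
X = np.array([[1, -1, -1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1,  1,  1,  1]], dtype=float)
XtX_inv = np.linalg.inv(X.T @ X)

# FDS side: relative prediction variance var(yhat)/sigma^2 at a point x
x = np.array([1, 0.5, -0.5, -0.25])          # intercept + factor settings
rel_pred_var = x @ XtX_inv @ x

# Power side: chance of detecting the AB interaction (two-sided alpha = 0.05)
Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
sigma, beta = 5.0, 5.0                        # assumed noise SD and coefficient
se_beta = sigma * math.sqrt(XtX_inv[3, 3])    # SE of the interaction estimate
z_crit = 1.96
power = Phi(beta / se_beta - z_crit) + Phi(-beta / se_beta - z_crit)

print(round(rel_pred_var, 4), round(power, 2))   # 0.3906 0.52
```

Note that the FDS value depends on where in the space you predict (the vector x), while power depends only on one diagonal element of (X'X)^-1 plus the anticipated coefficient and noise, which is why the two summaries can disagree.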

 Improving one often improves the other

For example:

  • Adding replicates reduces error variance
    • increases power
    • reduces prediction variance → improves FDS
  • Switching to a more prediction-oriented design (e.g., I‑optimal)
    • improves prediction precision
    • often increases effect detectability (power)
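The replication bullet can be checked numerically. A minimal sketch, using a toy coded 2^2 design of my own (nothing JMP-specific): replicating every run shrinks both the coefficient variance (better power) and the prediction variance (better FDS) at once.

```python
# Illustrative sketch: replication improves power and FDS simultaneously,
# because both variances scale with (X'X)^-1.
import numpy as np

X = np.array([[1, -1, -1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1,  1,  1,  1]], dtype=float)
X_rep = np.vstack([X, X])                    # every run replicated once

def coef_var(M):                             # var(beta_hat)/sigma^2 for the
    return np.linalg.inv(M.T @ M)[3, 3]      # interaction column (index 3)

def pred_var(M, x):                          # var(yhat(x))/sigma^2
    return x @ np.linalg.inv(M.T @ M) @ x

x = np.array([1, 0.5, -0.5, -0.25])
print(coef_var(X), coef_var(X_rep))          # 0.25 -> 0.125 (power improves)
print(pred_var(X, x), pred_var(X_rep, x))    # 0.390625 -> 0.1953125 (FDS improves)
```

Doubling the runs halves (X'X)^-1 exactly, so every variance-based quality measure improves together in this simple case.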

But they optimize different goals

Sometimes one improves while the other does not:

  • A D‑optimal design might maximize parameter precision but produce poorer average prediction variance (worse FDS).
  • A design that spreads points out (good for power) might leave prediction gaps in the design space (worse FDS).
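A toy one-factor example (entirely hypothetical, not from this thread) makes the D-optimal trade-off concrete: for a quadratic model on [-1, 1], a design that piles runs on the D-optimal support points beats a more spread-out design on the D-criterion, yet loses on average prediction variance (the quantity the FDS plot summarizes).

```python
# Illustrative sketch: D-criterion vs. average prediction variance can rank
# two designs in opposite orders. Quadratic model y = b0 + b1*x + b2*x^2.
import numpy as np

def model_matrix(xs):
    return np.array([[1.0, x, x * x] for x in xs])

def d_criterion(X):                          # det(X'X): bigger is better for D
    return np.linalg.det(X.T @ X)

def avg_pred_var(X, grid=np.linspace(-1, 1, 2001)):
    XtX_inv = np.linalg.inv(X.T @ X)
    F = model_matrix(grid)
    # average of f(x)' (X'X)^-1 f(x) over the grid: smaller is better for FDS
    return np.mean(np.einsum("ij,jk,ik->i", F, XtX_inv, F))

A = model_matrix([-1, 0, 1, 1])              # concentrated on D-optimal points
B = model_matrix([-1, -1/3, 1/3, 1])         # spread across the range

print(d_criterion(A), d_criterion(B))        # A wins on D (8.0 vs ~7.02)
print(avg_pred_var(A), avg_pred_var(B))      # B wins on average prediction variance
```

Design A estimates the coefficients more precisely overall, but design B predicts more evenly across the interval, which is exactly the kind of mismatch between a good-looking FDS plot and a mediocre power number.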

 

Hope that helps to explain why this might happen. 

 

 

/****NeverStopLearning****/


2 REPLIES

Re: Good fraction of space, but low power of DoE

Important considerations for the power are the number of runs and the actual design. You did not state either, so I created a design for the same factors and model using the default number of runs (25). The power in this case is better than what you showed while using the same anticipated coefficients.

[Attached screenshot: Screenshot 2026-01-23 102701.png, showing the power analysis for the 25-run design]

I am not sure what you expected for the relative prediction variance in this case.


Recommended Articles