Hi,
I think you're mixing things up.
Both FDS plots and power analysis tell you something about how good your DOE is, but they measure completely different qualities:
- FDS Plot → How precisely can your model predict across the design space? In other words: how reliable are the predictions over most of the region?
- Power Analysis → How likely is your experiment to detect real effects, given the noise and the sample size? In other words: can we reliably detect meaningful differences?
They are related because both help you decide whether your design is adequate, but they operate on different levels of the DOE.
- Power = 0.65 means a 65% chance of detecting an effect of the chosen size.
- If your FDS curve is high (e.g., 80% of the design space falls below your variance threshold), your design gives reliable predictions over most of the region.
Power analysis focuses on hypothesis testing, not prediction precision.
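To get a concrete feel for the power side, here is a minimal sketch using statsmodels; the effect size, group size, and alpha are made-up numbers purely for illustration:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Hypothetical numbers: standardized effect size (Cohen's d) = 0.5,
# 20 runs per group, significance level alpha = 0.05
power = analysis.solve_power(effect_size=0.5, nobs1=20, alpha=0.05)
print(f"power = {power:.2f}")  # probability of detecting an effect of this size

# Asking the same question the other way around:
# how many runs per group are needed to reach 80% power?
n = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"runs per group for 80% power = {n:.1f}")
```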
Here are some examples to hopefully make this easier to understand:
Similarities: Both depend on the design's variance structure: FDS → prediction variance, Power → effect (coefficient) variance. Both are driven by the same core ingredients: the design matrix and the noise level.
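To make that shared dependence on the design matrix concrete, here is a minimal sketch of the FDS idea, assuming a hypothetical 2-factor face-centered CCD and a full quadratic model; it samples the scaled prediction variance N·f(x)'(X'X)⁻¹f(x) over the region and reports the fraction below a cut-off:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical design: 2-factor face-centered CCD with 3 center replicates
design = np.array([
    [-1, -1], [ 1, -1], [-1,  1], [ 1,  1],   # factorial corners
    [-1,  0], [ 1,  0], [ 0, -1], [ 0,  1],   # face-centered axial points
    [ 0,  0], [ 0,  0], [ 0,  0],             # center replicates
], dtype=float)

def model_matrix(pts):
    """Full quadratic model: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones(len(pts)), x1, x2, x1 * x2, x1**2, x2**2])

X = model_matrix(design)
M_inv = np.linalg.inv(X.T @ X)

# Scaled prediction variance N * f(x)' (X'X)^-1 f(x) at random points in the region
pts = rng.uniform(-1, 1, size=(10_000, 2))
F = model_matrix(pts)
spv = len(design) * np.sum((F @ M_inv) * F, axis=1)

threshold = 3.0  # arbitrary cut-off, only for illustration
print(f"fraction of space with SPV <= {threshold}: {np.mean(spv <= threshold):.0%}")
```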
Improving one often improves the other. For example (see the sketch after this list):
- Adding replicates reduces error variance, which
  - increases power, and
  - reduces prediction variance → improves FDS.
- Switching to a better-suited optimal design (e.g., I‑optimal)
  - improves prediction precision, and
  - often increases effect detectability (power).
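A quick numeric illustration of the replicate point, assuming a made-up 2^2 factorial with a main-effects model: running every point twice shrinks both the coefficient standard errors (more power) and the prediction variance (better FDS):

```python
import numpy as np

# Made-up 2^2 factorial, main-effects model: columns are 1, x1, x2
X = np.array([
    [1, -1, -1],
    [1,  1, -1],
    [1, -1,  1],
    [1,  1,  1],
], dtype=float)

def summarize(X):
    M_inv = np.linalg.inv(X.T @ X)
    coef_se = np.sqrt(np.diag(M_inv))                  # coefficient SEs in sigma units
    avg_pv = np.mean(np.sum((X @ M_inv) * X, axis=1))  # avg prediction variance at the runs
    return coef_se, avg_pv

se_1, pv_1 = summarize(X)                  # single replicate
se_2, pv_2 = summarize(np.vstack([X, X]))  # everything run twice

print("coef SE:", se_1, "->", se_2)                   # shrinks -> higher power
print("avg prediction variance:", pv_1, "->", pv_2)   # shrinks -> better FDS
```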
But they optimize different goals
Sometimes one improves while the other does not:
- A D‑optimal design might maximize parameter precision but produce poorer average prediction variance (worse FDS).
- A design that spreads points out (good for power) might leave prediction gaps in the design space (worse FDS).
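And a sketch of that trade-off, assuming a hypothetical quadratic model in one factor: a crude random search picks the candidate design that is best by the D-criterion (determinant of X'X) and the one best by average prediction variance, and they are generally not the same design:

```python
import numpy as np

rng = np.random.default_rng(7)

def model_matrix(x):
    """Hypothetical quadratic model in one factor: 1, x, x^2."""
    return np.column_stack([np.ones_like(x), x, x**2])

grid = np.linspace(-1, 1, 201)  # points where predictions matter

def d_crit(X):
    return np.linalg.det(X.T @ X)  # bigger -> better parameter precision

def avg_pred_var(X):
    M_inv = np.linalg.inv(X.T @ X)
    F = model_matrix(grid)
    return np.mean(np.sum((F @ M_inv) * F, axis=1))  # smaller -> better FDS-style behavior

# Crude random search over 6-run designs on [-1, 1]
designs = [model_matrix(rng.uniform(-1, 1, size=6)) for _ in range(5_000)]
best_d = max(designs, key=d_crit)
best_i = min(designs, key=avg_pred_var)

print(f"D-best: det = {d_crit(best_d):.2f}, avg pred var = {avg_pred_var(best_d):.3f}")
print(f"I-best: det = {d_crit(best_i):.2f}, avg pred var = {avg_pred_var(best_i):.3f}")
```

By construction, the D-best design has at least as large a determinant but also at least as high an average prediction variance as the I-best one, which is exactly the trade-off described above.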
Hope that helps explain why this might happen.
/****NeverStopLearning****/