And to finish the explanation, and to emphasize the importance of the relative comparison of designs mentioned by @Phil_Kay:
"Somewhere in that post, it compares the relative prediction variance of both designs and states: "...This means that the relative standard error is 0.732 for the D-optimal design and 0.456 for the I-optimal design..." furthermore, it finishes saying something like "...confidence intervals for the expected response based on the D-optimal design are about 60% wider..."
Since it states that confidence intervals for the D-optimal design are about 60% wider than those for the I-optimal design, let's figure out where this comes from, using the equations I mentioned before, with a short calculation (sorry for the formatting):
- Width CI[D-opt] / Width CI[I-opt] = Prediction StdError[D-opt] / Prediction StdError[I-opt]
- = sqrt(Prediction Variance[D-opt]) / sqrt(Prediction Variance[I-opt])
- = sqrt(Relative Prediction Variance[D-opt] x MSE[D-opt]) / sqrt(Relative Prediction Variance[I-opt] x MSE[I-opt])
At this stage, we have assumed that the degrees of freedom and the number of terms in both designs are the same, and we further assume that the Mean Square Error (MSE) of both models will be equivalent, so MSE[D-opt] = MSE[I-opt].
We now have:
- = sqrt(Relative Prediction Variance[D-opt]) / sqrt(Relative Prediction Variance[I-opt])
If we replace these terms by their values from the case in the JMP Help:
- = sqrt(0.53562) / sqrt(0.20833) = 0.732 / 0.456 ≈ 1.60
So dividing the width of the confidence interval for the D-optimal design by the width of the confidence interval for the I-optimal design gives 1.60, which confirms that confidence intervals for the expected response based on the D-optimal design are about 60% wider than those for the I-optimal design (assuming equivalent models for both designs, i.e., the same number of terms, the same degrees of freedom, and the same MSE for the D- and I-optimal models). A quick numerical check is sketched below.
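If you want to reproduce this check outside JMP, here is a minimal Python sketch of the same arithmetic. The two relative prediction variance values (0.53562 and 0.20833) are the ones quoted from the JMP Help example above, and the equal-MSE assumption is the one stated earlier:

```python
import math

# Relative prediction variances from the JMP Help example
rel_var_d = 0.53562  # D-optimal design
rel_var_i = 0.20833  # I-optimal design

# Relative standard errors are the square roots of the relative variances
se_d = math.sqrt(rel_var_d)  # ~0.732
se_i = math.sqrt(rel_var_i)  # ~0.456

# Assuming equal MSE and equal degrees of freedom for both models,
# the ratio of CI widths reduces to the ratio of relative standard errors
ratio = se_d / se_i

print(f"Relative std error (D-opt): {se_d:.3f}")
print(f"Relative std error (I-opt): {se_i:.3f}")
print(f"CI width ratio (D / I):     {ratio:.2f}")  # ~1.60, i.e. ~60% wider
```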
Hope this finally clarifies the sentence you mentioned.
Victor GUILLER
L'Oréal Data & Analytics
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)