Hi @Mathej01,
I think the explanation from JMP Help on Relative Prediction Variance is quite easy to understand:
"For given settings of the factors, the prediction variance is the product of the error variance and a quantity that depends on the design and the factor settings. Before you run your experiment, the error variance is unknown, so the prediction variance is also unknown. However, the ratio of the prediction variance to the error variance is not a function of the error variance. This ratio, called the relative prediction variance, depends only on the design and the factor settings. Consequently, the relative variance of prediction can be calculated before acquiring the data."
"After you run your experiment and fit a least squares model, you can estimate the error variance using the mean squared error (MSE) of the model fit. You can estimate the actual variance of prediction at any setting by multiplying the relative variance of prediction at that setting."
In short, the relative prediction variance displayed when creating your design depends only on your design choice and construction. It lets you know in advance the strengths and weaknesses of your design, before conducting any experiments, and it shows you where the maximum variance in your predictions will be located.
To calculate the variance of prediction at any point of your design, multiply the MSE of your model by the relative prediction variance at that point.
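To make this concrete, here is a minimal sketch (outside JMP, in Python/NumPy) of the two steps above, assuming a hypothetical 2x2 full factorial with a main-effects model and using the standard least squares form of the relative prediction variance, x0'(X'X)^-1 x0. The design, the evaluation points, and the MSE value are made up purely for illustration.

```python
import numpy as np

# Hypothetical 2^2 full factorial in coded units (-1/+1):
# columns are intercept, X1, X2 (main-effects model).
X = np.array([
    [1, -1, -1],
    [1, -1,  1],
    [1,  1, -1],
    [1,  1,  1],
], dtype=float)

def relative_prediction_variance(X, x0):
    """Relative prediction variance x0' (X'X)^-1 x0.

    Depends only on the design matrix X and the factor settings x0,
    not on the (unknown) error variance.
    """
    XtX_inv = np.linalg.inv(X.T @ X)
    x0 = np.asarray(x0, dtype=float)
    return float(x0 @ XtX_inv @ x0)

# Before running the experiment: see where prediction variance is largest.
center = relative_prediction_variance(X, [1, 0, 0])  # design center
corner = relative_prediction_variance(X, [1, 1, 1])  # a corner of the design
print(f"Relative prediction variance at center: {center:.3f}")  # 0.250
print(f"Relative prediction variance at corner: {corner:.3f}")  # 0.750

# After running the experiment: multiply by the MSE of the fitted model
# to estimate the actual prediction variance at a given setting.
mse = 2.0  # placeholder; use the MSE reported by your least squares fit
print(f"Estimated prediction variance at corner: {mse * corner:.3f}")  # 1.500
```

In this sketch the corner points have three times the relative prediction variance of the center, which is exactly the kind of "where is my design weakest" insight you can get before collecting any data.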
I hope this answer helps,
Victor GUILLER
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)