
Discussions

Solve problems, and share tips and tricks with other JMP users.

Lack of fit test

Hello everyone. I have a question related to DOE.
When conducting a lack of fit test, how should the F ratio be calculated: F = MS residual / MS pure error, or F = MS lack of fit / MS pure error? The idea is to compare the model error with the experimental error.

Do the two formulas give different results? Which result is more reliable, and when should each of the two formulas be used?

For the same data, the first formula gives a lower F value than the second.

I would be grateful if someone could clarify this issue.

Best regards, Milen

3 REPLIES
Victor_G
Super User

Re: Lack of fit test

Hi @DiscreteIbex427,

Welcome to the Community!

The F ratio used in the lack of fit test is the ratio of the Mean Square for Lack of Fit to the Mean Square for Pure Error, so your second option (see JMP Help).

The residual of a model is the sum of the model's lack of fit (underfitting) and pure error (estimated through replicates), hence the choice of the second formula: you want to compare these two sources of error with each other. I don't know when or if the first formula may be used, but at least not in the lack of fit context. Note that since SS residual = SS lack of fit + SS pure error, MS residual is a degrees-of-freedom-weighted average of MS lack of fit and MS pure error. So whenever lack of fit is present (MS lack of fit > MS pure error), the first formula will indeed give a lower F value than the second, which matches what you observed.
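To make the decomposition concrete, here is a small numerical sketch (made-up data, not JMP output) that fits a straight line to replicated runs, splits the residual sum of squares into lack of fit and pure error, and computes both F ratios from your question:

```python
import numpy as np
from scipy import stats

# Hypothetical data: 5 design points, each run twice (replicates).
# The true response is curved, so a straight-line fit has lack of fit.
x = np.array([1, 1, 2, 2, 3, 3, 4, 4, 5, 5], dtype=float)
y = np.array([1.1, 1.3, 3.9, 4.2, 9.1, 8.8, 15.8, 16.2, 25.3, 24.7])

# Fit the model being tested for lack of fit (simple linear regression)
b1, b0 = np.polyfit(x, y, 1)
fitted = b0 + b1 * x
ss_resid = np.sum((y - fitted) ** 2)   # total residual SS
df_resid = len(y) - 2                  # n minus number of model parameters

# Pure error: variation among replicates at each distinct x value
ss_pe, df_pe = 0.0, 0
for xv in np.unique(x):
    grp = y[x == xv]
    ss_pe += np.sum((grp - grp.mean()) ** 2)
    df_pe += len(grp) - 1

# Lack of fit is whatever remains of the residual after pure error
ss_lof = ss_resid - ss_pe
df_lof = df_resid - df_pe

ms_resid = ss_resid / df_resid
ms_pe = ss_pe / df_pe
ms_lof = ss_lof / df_lof

F_lof = ms_lof / ms_pe    # second formula: the standard lack of fit F ratio
F_alt = ms_resid / ms_pe  # first formula from the question
p_lof = stats.f.sf(F_lof, df_lof, df_pe)
print(F_lof, F_alt, p_lof)
```

With curvature present, MS lack of fit exceeds MS pure error, so `F_lof` comes out larger than `F_alt`, illustrating why the first formula gives the lower value.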

Hope this brings some clarification,

Victor GUILLER

"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)

Re: Lack of fit test

Hello, Victor, thank you for your time and comprehensive answer.
My question was prompted by two sources:
- the book by Vuchkov and Boyadzhieva, where this issue is discussed in Chapter 2, section 2.3.4, page 56;
- the article by Atasoy, which discusses an alternative method for calculating the F ratio in the lack of fit test.
If anyone has encountered a similar case, I would appreciate an explanation of when to use each formula and which one gives more reliable results.
I am attaching the two cited sources.
Once again, thank you for the answer.

Best regards, Milen

statman
Super User

Re: Lack of fit test

I haven't read the attached papers in depth, but what stands out quickly in the Atasoy paper, unless I am misinterpreting his words, is the use of repeated measures as estimates of experimental error. IMHO, this is an incorrect way to estimate the random errors (or the MSE). The estimate should come from randomized replicates, not repeated measures. Repeated measures are within-treatment estimates of variation, whereas the MSE should be estimated from the errors between treatments.
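A toy simulation (mine, not from either paper) shows why repeated measures understate experimental error: within-run repeats only see measurement noise, while replicate runs also pick up run-to-run variation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical variance components: run-to-run (experimental) sd = 2.0,
# within-run (measurement) sd = 0.5; 8 runs, 3 repeated measures each
run_effects = rng.normal(0.0, 2.0, size=8)
repeats = run_effects[:, None] + rng.normal(0.0, 0.5, size=(8, 3))

# "Repeated measures" estimate: pooled within-run variance
# (captures only measurement noise, ~0.25)
within_var = repeats.var(axis=1, ddof=1).mean()

# Replicate-based estimate: variance of the run means
# (captures run-to-run plus averaged measurement noise, ~4.1)
between_var = repeats.mean(axis=1).var(ddof=1)

print(within_var, between_var)
```

Using `within_var` as the MSE would make the denominator of any F ratio far too small, inflating apparent significance.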

Also, if you could provide context for your question, we might be better able to interpret what you are asking. For example, was the "lack of fit" due to model reduction, or to assignable model terms inadvertently left out?

"All models are wrong, some are useful" G.E.P. Box
