This problem has caused long-standing confusion and has never been fundamentally resolved.
0. The ANOVA results from Minitab and JMP are exactly the same; only the variance component results differ. Note that Minitab performs two ANOVAs (with and without the interaction).
1. Minitab's help gives the following formulas for calculating the variance components, but it does not say which method they correspond to, EMS or REML. According to my verification, when an estimate comes out negative it is forced to 0. JMP's help does not explain the specific calculation method for the variance components. (See the sketch after this list for the EMS formulas I am comparing against.)
2. In JMP there are two variance component results, as shown in the figure below, and they also differ from each other. I don't know why.
3. One thing is certain: the results from Minitab are consistent with those in the MSA manual.
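For reference, here is a minimal sketch of the textbook EMS (ANOVA-method) estimates I am comparing against. It is my own illustration for a crossed study with p parts, o inspectors and r repeats, not code taken from Minitab or JMP, and the mean squares are assumed inputs; negative estimates are clamped to 0, which matches what I observed.

```python
def ems_varcomps(ms_part, ms_op, ms_int, ms_err, p, o, r):
    """Textbook EMS (ANOVA-method) variance components for a crossed
    Gage R&R study with p parts, o operators, r repeats (full model,
    interaction included). Negative estimates are set to 0."""
    clamp = lambda x: max(x, 0.0)
    return {
        "Repeatability": clamp(ms_err),                        # sigma^2 equipment
        "Operator":      clamp((ms_op - ms_int) / (p * r)),    # sigma^2 operator
        "Part*Operator": clamp((ms_int - ms_err) / r),         # sigma^2 interaction
        "Part":          clamp((ms_part - ms_int) / (o * r)),  # sigma^2 part
    }

# Example with the layout we typically use, 10 parts x 3 inspectors x 3 repeats
# (the mean squares below are made up, only to show the call):
print(ems_varcomps(ms_part=2.5, ms_op=0.40, ms_int=0.05, ms_err=0.04, p=10, o=3, r=3))
```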
There are multiple ways to approach any problem. Ask 10 different people and you'll get 10 different answers, none of them wrong. (Unless, I suppose, the person you're asking is simply uninformed.)
For cases like this you need to use the method your customer asks you to use. Simple as that. If they want the MSA manual method, give them that. If they want Minitab, give them that. If they want JMP, give them that. If they have some Excel template, use that. Etc.
Now, for your own internal approval of measurement systems, where you get to set the method, just choose the one you're most familiar with. If a measurement system is good, it should pass all methods.
Personally I like Wheeler's approaches and a control-chart way of assessing measurement systems. It's straightforward and easy to understand and apply. All these enumerative statistics create confusion and wasted debate, as you've demonstrated.
Good luck. I love the community here, and it seems some good detective work has been done comparing all the methods. A significant interaction effect means the MSA fails anyway; we would never want the measurement of a part to depend on who took the measurement. So if the difference between JMP and Minitab only arises when there is a significant interaction effect, then it doesn't matter what the p-values or sums of squares are: the MSA fails, end of story. Improve the method so all operators get the same results, then redo the MSA to confirm it now passes without a significant interaction.
Sorry to chime in, but I wanted to provide some comments about the discussion here:
These points won't answer your direct questions, but I found them pragmatic and rational when dealing with MSA studies. I'm sure @statman would also have a lot to say.
Have you read through JMP documentation?
There are also community posts about this topic:
You can also force JMP to give results similar to Minitab's, but you have to partially script it yourself (force the EMS method to be used, drop the interaction, ...).
Thank you, I have read these.
I know there are three methods: EMS, REML and Bayesian. According to JMP's help, if the data are balanced and there are no negative values, the EMS method is used. But it still does not give the same result as Minitab. You can try it.
Hi @Xinghua : not sure if I'm answering your question.
1. The formulas you see in Minitab are EMS. There are no closed-form formulas for REML; the REML solution is found numerically.
2. The Var Comp for Gage R&R section in JMP is based on sums of the estimates in the Variance Components section above:
Repeatability = Within
Reproducibility = Inspector + Inspector*Samples
Part to Part = Samples
And Gage R&R = Repeatability + Reproducibility
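If it helps to see that roll-up as arithmetic, here is a small sketch (a generic illustration, not JMP's internal code); the inputs are the variance components from the table above, and %Contribution is simply each component divided by the total.

```python
def gauge_rr_rollup(within, inspector, inspector_by_samples, samples):
    """Roll individual variance components up into the Gage R&R categories.
    Inputs are variance components (not standard deviations)."""
    repeatability = within
    reproducibility = inspector + inspector_by_samples
    gage_rr = repeatability + reproducibility
    part_to_part = samples
    total = gage_rr + part_to_part
    pct = lambda v: 100.0 * v / total
    return {
        "Gage R&R":          (gage_rr, pct(gage_rr)),
        "  Repeatability":   (repeatability, pct(repeatability)),
        "  Reproducibility": (reproducibility, pct(reproducibility)),
        "Part-to-Part":      (part_to_part, pct(part_to_part)),
        "Total":             (total, 100.0),
    }

# Example with made-up variance components:
for name, (vc, pct_val) in gauge_rr_rollup(0.04, 0.03, 0.01, 1.20).items():
    print(f"{name:<20} VarComp={vc:.4f}  %Contribution={pct_val:5.1f}")
```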
How are the Var Comps in Minitab different than JMP?
Thank you very much. I am not a professional statistician. I just want to know how to answer customers when they ask about inconsistent results. The algorithm of MINITAB is consistent with the AIAG MSA manual.
In addition, according to the JMP help description, when the data are balanced and the calculated variance components are not negative, the EMS method is used. But according to my verification, the results from Minitab and JMP are always different. Typically we run the GR&R with 3 inspectors * 3 trials * 10 samples.
Hi @Xinghua : If I use the "2 Factors Crossed" data set in the Variability Data folder in the JMP Sample Data Folder, JMP gives the exact same results as Minitab.
Thank you. I checked it out, and the results are exactly the same, as you said. I noticed that in that data set the interaction effect (inspectors * parts) is significant. In actual processes, I have never seen a significant interaction effect.
This may be the root cause of the difference, because if the interaction effect is not significant, Minitab uses a different calculation method. I will continue to check it.
I verified it again: in my own data, the interaction effect is not significant.
After forcing the interaction effect to be included in the variance components (even though the interaction is not significant in the ANOVA), the results from JMP and Excel are exactly the same.
This may be a mistake in Minitab, because the AIAG MSA manual does not mention what to do when the interaction effect is not significant.
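For what it's worth, here is a sketch of the reduced-model calculation I believe Minitab switches to when the interaction is dropped: the interaction sum of squares is pooled with repeatability, and the pooled mean square is used in the operator and part estimates. This is based on the standard ANOVA method, not on Minitab's documented code, so treat the details as an assumption.

```python
def ems_varcomps_no_interaction(ms_part, ms_op, ss_int, df_int, ss_err, df_err, p, o, r):
    """Reduced-model EMS estimates for a crossed p parts x o operators x r repeats
    study when the part*operator interaction is dropped: its sum of squares is
    pooled with the error term, and the pooled mean square replaces MS_interaction
    in the operator and part estimates. Negative estimates are set to 0."""
    ms_pooled = (ss_int + ss_err) / (df_int + df_err)  # pooled repeatability
    clamp = lambda x: max(x, 0.0)
    return {
        "Repeatability": clamp(ms_pooled),
        "Operator":      clamp((ms_op - ms_pooled) / (p * r)),
        "Part":          clamp((ms_part - ms_pooled) / (o * r)),
    }
```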
Hi @Xinghua : I'm not so sure I'd label these kinds of things as "mistakes". While the AIAG MSA manual is a common reference for these kinds of problems, it should not be viewed as correct to the exclusion of other methods and other ways of approaching the problem. And a case could be made either way whether to include the interaction, or not, if it is not "significant".
That said, I also recognize your dilemma: how do you respond to customers when they ask about inconsistent results? The EMS method is older than REML and the like; it can be implemented without any sophisticated numerical algorithms, i.e., there are formulas. It is not the "best" method, nor is it the only method. It is, however, simple and fit for purpose (hence its appeal in such manuals). To include or not include the interaction term? Good question. I don't want to venture too far down that rabbit hole, but the case for not including it is straightforward: if the interaction effect is negligible (as could be inferred from its being non-significant), why include it? On the other hand, if we think of variance components analysis as an estimation problem (rather than a significance-testing problem), then a case can be made for including the interaction regardless of significance.