
Differences in GRR results between JMP and MINITAB

Xinghua
Level III

This problem has caused long-standing confusion and has never been fundamentally resolved.


0. The ANOVA results are exactly the same; only the variance component results differ. Note, however, that Minitab performs two ANOVAs (with and without the interaction).

[Image: 01.jpg]


1. Minitab's Help gives the following formulas for calculating the variance components, but it does not say which method they represent, EMS or REML. According to my verification, when a negative value appears it is forced to 0 (a sketch of this calculation appears after the image below). JMP's Help does not explain its specific method for calculating the variance components.

 

[Image: 02.jpg]
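For reference, here is a minimal sketch of the EMS (expected mean squares) calculation for a crossed study, with the clamp-to-zero behavior described above. The study layout and mean squares are hypothetical placeholders, not values from the screenshots:

```python
# Minimal sketch of the EMS (expected mean squares) method for a crossed
# Gauge R&R study, with negative estimates clamped to zero as described
# above. The layout and mean squares are hypothetical placeholders;
# substitute the values from your own ANOVA table.
p, o, r = 10, 3, 2   # parts, operators, replicates (assumed layout)
MS_part, MS_oper, MS_inter, MS_error = 1.2, 0.5, 0.08, 0.04  # hypothetical

var_repeat = MS_error                                  # repeatability
var_inter  = max(0.0, (MS_inter - MS_error) / r)       # operator*part
var_oper   = max(0.0, (MS_oper - MS_inter) / (p * r))  # reproducibility
var_part   = max(0.0, (MS_part - MS_inter) / (o * r))  # part-to-part

var_grr   = var_repeat + var_inter + var_oper
var_total = var_grr + var_part
print(f"%GRR (variance basis): {100 * var_grr / var_total:.1f}%")
```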
2. In JMP there are two variance component reports, as shown in the figure below, and their results also differ. I don't know why. (One possible explanation is sketched after the image.)

 

[Image: 03.jpg]
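If the two tables come from different estimation methods (for example, method-of-moments/EMS versus REML), the difference can be reproduced outside JMP. Here is a hedged sketch of a REML fit for the crossed model using statsmodels; the file and column names are assumptions:

```python
# Hedged sketch: REML estimates for the crossed Gauge R&R model via
# statsmodels. The file and column names (part, operator, y) are
# assumptions; rename to match your data.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("grr_data.csv")   # hypothetical data file
df["const"] = 1                    # one group, so all effects are crossed

vc = {
    "part": "0 + C(part)",
    "operator": "0 + C(operator)",
    "part_x_operator": "0 + C(part):C(operator)",
}
model = sm.MixedLM.from_formula("y ~ 1", groups="const", re_formula="0",
                                vc_formula=vc, data=df)
result = model.fit(reml=True)
print(result.summary())            # variance components = REML estimates
```

If the two JMP tables do use different estimators, they will disagree most when components are small or negative, because REML constrains every component to be non-negative while moments-based EMS estimates can go below zero.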


3. One thing is certain: Minitab's results are consistent with those of the MSA manual.

12 REPLIES
Xinghua
Level III


Re: Differences in GRR results between JMP and MINITAB

Thank you very much.

By the Wheeler method you mentioned, do you mean EMP?

Victor_G
Super User


Re: Differences in GRR results between JMP and MINITAB

Yes, and I'm specifically referring to the Monitor Classification Legend, which is part of the EMP Results Report.

It offers a summarized way to show, compare, and rank several gauges based on their ability to detect the strength of a signal.
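For readers who haven't seen EMP, the classification is driven by the intraclass correlation (ICC), the share of total variance attributable to the parts. A minimal sketch, assuming Wheeler's usual class cutoffs; the function and its inputs are illustrative, not JMP's API:

```python
# Hedged sketch of the EMP-style monitor classification that the legend
# summarizes. The intraclass correlation (ICC) is the share of total
# variance attributable to the parts; the cutoffs follow Wheeler's EMP
# classes. The function and its inputs are illustrative, not JMP's API.
def classify_monitor(var_part: float, var_measurement: float) -> str:
    icc = var_part / (var_part + var_measurement)
    if icc >= 0.8:
        return "First Class"
    if icc >= 0.5:
        return "Second Class"
    if icc >= 0.2:
        return "Third Class"
    return "Fourth Class"

print(classify_monitor(var_part=0.90, var_measurement=0.10))  # First Class
```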

Victor GUILLER

"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)
awelsh
Level III


Re: Differences in GRR results between JMP and MINITAB

There are multiple ways to approach any problem. Ask 10 different people and you'll get 10 different answers, none of them wrong. (Unless, I suppose, the person you're asking is simply uninformed.)

 

For cases like this, you need to use the method your customer asks you to use. Simple as that. If they want the MSA manual method, give them that. If they want Minitab, give them that. If they want JMP, give them that. If they have some Excel template, use that. And so on.

 

Now, for your own internal approval of measurement systems, where you get to set the method, just choose the one you're most familiar with. If a measurement system is good, it should pass all methods.

 

Personally, I like Wheeler's approaches and a control-chart way of assessing measurement systems. It's straightforward and easy to understand and apply. All these enumerative statistics create confusion and wasted debate, as you've demonstrated.

 

Good luck. I love the community here, and it seems some good detective work has been done to compare all the methods.

A significant interaction effect means the MSA fails anyway: we would never want the measurement of a part to depend on who took the measurement. So if the difference between JMP and Minitab only appears when there is a significant interaction effect, then it doesn't matter what the p-value or sum-of-squares numbers are. The MSA fails, end of story. Improve the method so all operators get the same results, then redo the MSA to confirm it now passes without a significant interaction.
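To make that last check concrete, here is a minimal sketch of testing the operator-by-part interaction with a two-way ANOVA; the file and column names are assumptions:

```python
# Minimal sketch of the interaction check described above. The file and
# column names (part, operator, y) are assumptions.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("grr_data.csv")   # hypothetical data file

fit = ols("y ~ C(part) * C(operator)", data=df).fit()
anova = sm.stats.anova_lm(fit, typ=2)
p_inter = anova.loc["C(part):C(operator)", "PR(>F)"]

if p_inter < 0.05:
    print(f"Significant interaction (p = {p_inter:.3f}): the MSA fails.")
else:
    print(f"No significant interaction (p = {p_inter:.3f}).")
```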