Hi,
I've just set up a pair of mixed models in the Fit Model platform, using one of the JMP demo data sets ("Blood Pressure"), with Doses as a fixed effect and Subjects as a random effect. To fit such models JMP offers a choice of methodologies, EMS or REML (the recommended one), so I tried them both, added a pairwise comparison of the doses to each (the "LS Means Student's t" option on the "Effect Details" panel, showing the Connecting Letters report only), and compared the outputs. The Dose F ratio, p-value, least-squares means and Connecting Letters report were all identical between the two fits; however, the standard errors of the least-squares means were different in every case.
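For what it's worth, here is roughly what I understand the EMS side of the calculation to be, sketched in Python with invented numbers standing in for the Blood Pressure table (this is just my reading of the textbook method-of-moments formulas for a balanced design, not JMP's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the JMP "Blood Pressure" table: a balanced design,
# n_subj subjects each measured at every dose (all values invented).
n_subj, doses = 6, ["300mg", "450mg", "600mg"]
subj_eff = rng.normal(0, 5, n_subj)                       # random Subject effect
dose_eff = {"300mg": 0.0, "450mg": -4.0, "600mg": -8.0}   # fixed Dose effect
y = np.array([[100 + dose_eff[d] + subj_eff[s] + rng.normal(0, 3)
               for d in doses] for s in range(n_subj)])   # rows = subjects

# LS means: in a balanced design these are just the dose (column) means,
# which presumably is why both methodologies agree on them.
ls_means = y.mean(axis=0)

# Classical EMS / ANOVA variance-component estimates (method of moments):
d = len(doses)
ms_subj = d * np.var(y.mean(axis=1), ddof=1)              # between-subject mean square
resid = y - y.mean(axis=0) - y.mean(axis=1, keepdims=True) + y.mean()
ms_err = (resid ** 2).sum() / ((n_subj - 1) * (d - 1))    # residual mean square
sigma2_subj = (ms_subj - ms_err) / d                      # EMS estimate of Subject variance

print(ls_means, sigma2_subj, ms_err)
```

My (possibly wrong) assumption was that both EMS and REML would then build the LS-mean standard errors from the same two variance components.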
That in itself wouldn't surprise me - different methodologies often do deliver different results - except that if the standard errors differ, I would have expected the Connecting Letters reports to differ too, because the pairwise comparisons are (presumably) based on the same calculations that generated the standard errors.
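To make my assumption concrete, here is a toy sketch with invented variance components (not JMP's actual formulas) of how, in principle, the standard error of a single LS mean and the standard error of a pairwise difference could come apart when a random Subject term is involved - which may or may not be what's going on here:

```python
import numpy as np

# Hypothetical variance components and cell size (invented, not JMP's numbers)
# for a model with a random Subject effect plus residual error.
sigma2_subj, sigma2_err, n = 25.0, 9.0, 6.0

# The SE of a single LS mean depends on whether the Subject variance is
# included in it or not - two plausible conventions:
se_mean_with_subj = np.sqrt((sigma2_subj + sigma2_err) / n)   # Subject var included
se_mean_resid_only = np.sqrt(sigma2_err / n)                  # residual variance only

# But in a pairwise (within-subject) comparison the Subject effect cancels,
# so the SE of a difference uses only the residual variance either way:
se_diff = np.sqrt(2 * sigma2_err / n)

print(se_mean_with_subj, se_mean_resid_only, se_diff)
```

If something like that is happening, two analyses could report quite different SEs for the means themselves yet still produce identical pairwise tests - but I don't know whether that's actually what JMP is doing.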
How is this possible? It's worth adding that although the standard errors here, while different, are still quite similar, the question originally came to light when I was analysing a far larger data set, in which the standard errors reported by one analysis were about three times those of the other - and yet all the Connecting Letters reports were again identical.
I'm running JMP 11, and I've attached a copy of the Blood Pressure data set with both analyses appended to it, for anyone who would like to check my calculations. Have I done something silly here?
Many thanks.