I've just set up a series of mixed models in the Fit Model platform, using one of the JMP demo data sets ("Blood Pressure", in which I've set Dose as a fixed effect and Subject as a random effect). To analyse those models I have a choice of methodologies: EMS or REML (recommended). I tried them both, added a pairwise comparison of the doses (the "LS Means Student's t" option on the "Effect Details" panel, showing the Connecting Letters report only), and compared the outputs. In the resulting output the Dose F ratio, P value, least-squares means and Connecting Letters report were all identical; however, the standard errors for the least-squares means were different in all cases.
This wouldn't surprise me - after all, different methodologies often do deliver different results - except that if the standard errors are different, I would have expected the Connecting Letters reports also to have produced different results - because the pairwise comparisons are (presumably) based on the same calculations that generated the standard errors.
How is this possible? It's worth adding that although the standard errors here, while different, are still quite similar, the question originally came to light when I was analysing a far larger data set in which the reported standard errors from one analysis were about three times those of the other - and yet all the Connecting Letters reports were again identical.
I'm running JMP 11, and attach a copy of the Blood Pressure data set with both analyses appended to it for anyone who would like to check my calculations. Have I done something silly here?
The standard errors differ between REML and EMS because REML calculates the LS Means standard errors as

SE = sqrt( L (X' V⁻¹ X)⁻¹ L' )

where L is the contrast vector for the LS Means, X is the design matrix, and V is the variance matrix.
EMS uses the ordinary least squares calculation for the LS Means standard errors, which does not include the variance matrix V:

SE = sqrt( σ̂² L (X' X)⁻¹ L' )

where σ̂² is the residual (error) variance estimate.
Thus EMS is basically treating all effects as fixed in these calculations.
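To make the difference concrete, here is a small numerical sketch of the two calculations using numpy. This is not JMP's internal code - the layout (2 doses crossed with 4 subjects, one observation per cell) and the variance components are made-up values for illustration; a real REML fit would estimate them from the data:

```python
import numpy as np

# Hypothetical toy layout: 2 doses x 4 subjects, one observation each.
# Model: y = dose (fixed) + subject (random) + error.
n_dose, n_subj = 2, 4
n = n_dose * n_subj

# Fixed-effects design matrix X: intercept + effect-coded dose column.
dose = np.repeat([0, 1], n_subj)
X = np.column_stack([np.ones(n), np.where(dose == 0, 1.0, -1.0)])

# Random-effects design matrix Z: one indicator column per subject.
subj = np.tile(np.arange(n_subj), n_dose)
Z = np.zeros((n, n_subj))
Z[np.arange(n), subj] = 1.0

# Assumed (not estimated) variance components.
var_subj, var_err = 2.0, 1.0
V = var_subj * Z @ Z.T + var_err * np.eye(n)  # marginal covariance of y

# Contrast row L for the LS Mean of dose level 0: intercept + dose effect.
L = np.array([[1.0, 1.0]])

# REML-style SE: sqrt( L (X' V^-1 X)^-1 L' ) -- includes V.
se_reml = np.sqrt(L @ np.linalg.inv(X.T @ np.linalg.inv(V) @ X) @ L.T).item()

# EMS/OLS-style SE: sqrt( sigma^2 L (X'X)^-1 L' ) -- V replaced by sigma^2 I,
# i.e. the subject variance component is ignored.
se_ems = np.sqrt(var_err * L @ np.linalg.inv(X.T @ X) @ L.T).item()

print(se_reml, se_ems)
```

With these toy numbers the REML-style standard error comes out larger, because it carries the subject variance component into the LS Mean's variance, whereas the OLS-style calculation uses only the residual variance - which is the pattern you saw, magnified in your larger data set.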
I would definitely recommend using REML over EMS.
I hope this helps.
Hi - many thanks: I'll read up on that. Can you think of a reason why the Connecting Letters reports for both methods would deliver identical results, however? If I were using a package that didn't provide this particular functionality, I'd probably calculate such a summary myself directly from the table of means and standard errors - in which case I'd presumably get different results from those that JMP delivers for the REML methodology. I'd therefore like to understand this better.
Thanks for your time on this: it's really helpful.