Test-slice degrees of freedom and questionable results?
Oct 1, 2018 1:11 PM
Hello, I'm looking for a little background on the test-slice procedure in JMP, and specifically how the DFs are calculated in this procedure. I've used test-slicing often, but recently noticed an instance where I don't believe the procedure is evaluating correctly. I have not been able to find any information on the actual details of the test-slice procedure online or in the JMP help info.
I am running a mixed-effects model with fixed factors A and B[A] (i.e., factor B is nested in factor A) and three random effects. There are ~8,000 observations, 6 levels of A, and between 2 and 17 levels of B in each level of A. When examining the test-slices for factor A (to assess whether factor B differs within the levels of A), two of the levels have highly significant test-slice results but essentially no difference in the actual LS-means of the B levels. These two levels also have a much larger DenDF than I'd anticipate for this contrast (~2,000), despite only having ~500 observations in each A level. I'm not sure if it is coincidence, but both levels of A with questionable results have only 2 levels of B, whereas all the other A levels have 5+ levels of B. If I subset the data and test the difference between B levels in each A level separately, there are clearly no significant differences between B levels (p = 0.978 and p = 0.886 for the two A levels, respectively).
So my questions are: 1) how are the test-slice DFs calculated in JMP, and how can I independently verify them; 2) what types of data/model issues could lead to such mis-calculation of test statistics; and 3) are there suggestions for alternative analysis strategies or model specifications to test this nested effect?
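For reference on question 1: my understanding is that the denominator DFs in JMP's mixed models come from a Satterthwaite-type approximation, which the Kenward-Roger method then adjusts. A minimal Python sketch of the underlying Welch-Satterthwaite idea (this is not JMP's actual implementation; the function name and the example numbers are just for illustration):

```python
# Welch-Satterthwaite effective degrees of freedom for a linear combination
# sum_i w_i * s_i^2 of independent variance estimates, where s_i^2 has df_i.
# This is the textbook approximation that Satterthwaite-type (and, with
# further bias corrections, Kenward-Roger) DenDF methods build on.

def satterthwaite_df(variances, weights, dfs):
    """Effective df for sum_i w_i * s_i^2."""
    num = sum(w * v for w, v in zip(weights, variances)) ** 2
    den = sum((w * v) ** 2 / d for w, v, d in zip(weights, variances, dfs))
    return num / den

# Sanity check: Welch's t-test with two equal-variance, equal-n samples
# should recover the pooled df of n1 + n2 - 2.
s1_sq, n1 = 4.0, 10
s2_sq, n2 = 4.0, 10
df_balanced = satterthwaite_df([s1_sq, s2_sq], [1 / n1, 1 / n2],
                               [n1 - 1, n2 - 1])
print(df_balanced)  # 18.0 = 10 + 10 - 2

# With unbalanced samples the effective df drops below the pooled value,
# which is one way the approximation can move DF in surprising directions.
df_unbalanced = satterthwaite_df([4.0, 4.0], [1 / 10, 1 / 3], [9, 2])
print(df_unbalanced)
```

The point is only that the effective DF is a nonlinear function of the variance-component estimates and their weights, so it need not track the raw observation counts in any intuitive way.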
Thanks for the quick reply. This helps me a little, but unfortunately I don't understand exactly what the K-R adjustment is doing, or why it has such a large effect on these two levels of A but not the others. Since these two levels of A are underrepresented compared to the remaining levels, I'd have thought the test results would be *less* likely to be significant. Is the K-R adjustment not appropriate in this case, or are the results I'm getting likely correct?
JMP Help includes the literature reference to the paper in which these adjustments are derived. I honestly can't say that I would expect the adjustment to move the results one way or the other.
The adjustments are appropriate. Without them, the degrees of freedom obtained from expected mean squares will not provide the proper distribution under the null hypothesis, so the p-values for any tests will be worthless.
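To illustrate why getting the DF right matters for the null distribution (a generic Python sketch, not tied to JMP or to this particular model): if a contrast's test statistic truly follows a t distribution with few degrees of freedom, but p-values are computed against a much larger DF (like the ~2,000 DenDF reported for the questionable slices), the test rejects too often under the null:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate null test statistics that truly follow a t distribution with
# 4 df (a stand-in for a contrast whose proper DenDF is small).
t_stats = rng.standard_t(df=4, size=100_000)

# Two-sided p-values using the correct df versus a grossly inflated df.
p_correct = 2 * stats.t.sf(np.abs(t_stats), df=4)
p_inflated = 2 * stats.t.sf(np.abs(t_stats), df=2000)

rate_correct = np.mean(p_correct < 0.05)    # ~0.05 by construction
rate_inflated = np.mean(p_inflated < 0.05)  # well above the nominal 0.05
print(rate_correct, rate_inflated)
```

Using the inflated DF shrinks the critical value (roughly 1.96 instead of 2.78 here), so spuriously "highly significant" slices are exactly the symptom you'd expect from an overstated DenDF.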