
JMP Ordinal regression output


Aug 3, 2013 1:27 PM
(843 views)

Hello all,

I am trying to fit a model using JMP 10, and the results confuse me; I thought perhaps you could assist.

I have an ordinal dependent variable with 3 categories. My main independent variable of interest is discrete, with no fewer than 5 categories! In addition to this IV, I have several other IVs to examine, some discrete and some continuous.

I ran the model using the Fit Model platform. First, I tested only the main IV of interest, and something weird happened. JMP created dummy variables, leaving the last category as the reference, which is fine. The whole model was not significant (whole-model test P = 0.1134); however, one dummy variable was actually significant (P = 0.036). The generalized R square was 0.01, the entropy R square was 0.009, and the training misclassification rate was 35% (so any test set would do worse). Every indication shows that the results are weak, but how can I explain a whole model being non-significant while one IV (in this case, one level of the dummy) is significant? It can't be a correlation among IVs, since they are all levels of the same dummy!
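One way to see how a single dummy level can look significant inside a non-significant whole-model test is the multiplicity of the parameter tests. A minimal sketch (the 4-test count follows from the 5 categories described above; treating the tests as independent is an idealization):

```python
# With the last of 5 categories as the reference, the fit carries 4
# dummy parameters, so the parameter-estimate table holds 4 separate
# tests. Even when every null hypothesis is true, the chance that at
# least one of them falls below alpha = 0.05 is well above 0.05.
alpha = 0.05
n_tests = 4  # 5 categories -> 4 dummy estimates

familywise_error = 1 - (1 - alpha) ** n_tests
print(f"P(at least one false positive) = {familywise_error:.3f}")  # ~0.185
```

So roughly one run in five would show at least one "significant" dummy level by chance alone, which is consistent with the reply's type I error caveat below.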

Then I tried adding a couple of continuous IVs. Both were very significant (P = 0.0015, P = 0.0126). However, the generalized R square is still only around 5%. The odds ratios are 1.16 and 0.9. When I changed the roll play and created a couple of box plots of these IVs grouped by my DV, it didn't take an expert to say that there is not an interesting difference. My sample size is 411; can it be that the large sample caused the low P-values, or is it a type I error? How can I explain very significant results when I get the sense that they have no meaning?
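To put odds ratios of 1.16 and 0.9 in perspective, they can be translated into probability shifts. A hedged sketch (the 0.5 baseline probability is an arbitrary illustrative value, not something reported in the post):

```python
# Odds ratios of 1.16 and 0.9 sit close to the null value of 1.
# Applying each to an assumed baseline probability of 0.5 shows how
# little a one-unit change in the predictor moves the outcome.
def shifted_probability(baseline_p, odds_ratio):
    """Apply an odds ratio to a baseline probability."""
    odds = baseline_p / (1 - baseline_p)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

for odds_ratio in (1.16, 0.9):
    p = shifted_probability(0.5, odds_ratio)
    print(f"OR {odds_ratio}: 0.500 -> {p:.3f}")
```

Shifts of a few percentage points per unit are easy to miss in a box plot, which matches the "no interesting difference" visual impression.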

Thank you !

1 REPLY


Jun 19, 2014 7:20 AM
(340 views)

What was "*weird*" about your initial impression of the first model?

Is your first predictor nominal or ordinal?

The whole-model test and the individual effect tests are not the same as the tests for individual parameter estimates. It is possible for a parameter associated with one level to be significant (i.e., the estimate differs from 0) while the term overall is not (i.e., the difference between the model with and without this term is not significant). So a predictor with at least 5 levels might include one level that produces a significant estimate while the term overall is not significant. It is also possible, with so many tests on one sample of data, that the significant test of the parameter estimate is a type I error.
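The distinction between one parameter's test and the joint test can be sketched numerically. The z statistics below are hypothetical, not the actual JMP output, and summing squared z values into a 4-df chi-square assumes independent estimates, which is a simplification:

```python
from math import erf, exp, sqrt

def wald_p(z):
    """Two-sided p-value for a Wald z statistic (normal approximation)."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def chi2_sf_df4(x):
    """Survival function of chi-square with 4 df (closed form for even df)."""
    return exp(-x / 2) * (1 + x / 2)

# Hypothetical z statistics for the 4 dummy estimates: one is
# individually significant, the other three are nowhere close.
z = [2.10, 0.30, -0.50, 0.80]
print(f"single-parameter p = {wald_p(z[0]):.3f}")        # ~0.036

# Joint Wald statistic: sum of squared z values on 4 df.
joint = sum(v * v for v in z)
print(f"joint p (4 df)     = {chi2_sf_df4(joint):.3f}")  # ~0.25
```

One estimate clears the 0.05 bar on its own, yet the joint test, which must account for all four parameters at once, does not.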

Your response might be influenced by more than the single nominal predictor, so the initial fit with a single predictor might not be significant. A sample R square of 0.05 is not uncommon with an ordinal response. Are the frequencies of the individual levels very different? Was there significant lack of fit?

If you mean by '*changed the roll play*' that you reversed the X and Y roles of the same variables, then an association between them remains regardless of the role, although the numerical results will change because the error is assumed to be in the Y role alone.

If you mean by '*it didn't take an expert to say that there is not an interesting difference*' that the effect was not pronounced, I think that is consistent with the impression conveyed by the estimates as well. With a sufficiently large sample of data, you may be able to find significant parameter estimates for small effects. The odds ratios are not large compared to the null value of 1 (no association), but the *p*-values lead us to decide that they are significant (with an accepted type I error rate of 0.05).
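How sample size turns a small effect significant can be sketched with assumed numbers: take the reported odds ratio of 1.16 (log-odds slope about 0.148), and assume the standard error of the estimate scales as s0 / sqrt(n), with s0 = 0.944 picked here purely so that n = 411 lands near the reported p-value of 0.0015:

```python
from math import erf, log, sqrt

def two_sided_p(z):
    """Two-sided p-value for a z statistic (normal approximation)."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

beta = log(1.16)  # log-odds slope for the reported odds ratio
s0 = 0.944        # assumed so that n = 411 reproduces p near 0.0015

for n in (100, 411):
    z = beta / (s0 / sqrt(n))
    print(f"n = {n}: p = {two_sided_p(z):.4f}")
```

Under these assumptions the identical effect size is non-significant at n = 100 but clearly significant at n = 411, which is why a small p-value alone says little about practical importance.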

Is it possible that there are effects on this response that are as yet unaccounted for by any of the previous models?

Learn it once, use it forever!