
Why are my predicted values for my model (ZI negative binomial) all the same?

So, I've got a number of zero-inflated negative binomial models and I want to graph them, so I've been saving the predicted values. For the models that don't have any significant factors, though, the values are all the same. Why is this? Shouldn't there still be at least some sort of trend, even if it's not significant? Similarly, one of my models had a significant interaction (between my blocking factor and my main continuous factor). One of the blocks has all of the same values while the other block exhibits a trend.
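For context, a flat set of predictions is exactly what an intercept-only count model produces: when no terms beyond the intercept enter the linear predictor, every row gets the same fitted mean. A minimal numpy sketch (made-up counts, not the poster's data) illustrates this:

```python
import numpy as np

# Hypothetical counts with no real relationship to any predictor.
rng = np.random.default_rng(0)
y = rng.poisson(3.0, size=20)

# For an intercept-only count model (Poisson or negative binomial alike),
# the maximum-likelihood estimate of the mean is the sample mean, so
# every observation gets the same predicted value -- a flat line,
# not a weak trend.
mu_hat = y.mean()
preds = np.full(len(y), mu_hat)

print(np.unique(preds))  # a single repeated value
```

The same logic explains the interaction case: within a block where the slope effectively cancels out, the fitted values collapse to one constant.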

 

Why am I seeing this?

 

Edit: Also, it seems like all my metrics have the same exact trend. While I know that's not completely impossible, it seems strange that they're almost all exactly the same. Does this have something to do with me using a negative binomial distribution?

NoScoped
Level I

Re: Why are my predicted values for my model (ZI negative binomial) all the same?

Thanks Mark. Just to clarify, a significant p-value for the test indicates overdispersion. And it then automatically adds an overdispersion parameter to the model? Or does it just estimate the magnitude of the parameter needed?

 

Edit: Additionally, what does the Deviance parameter in the Goodness of Fit test signify?

Re: Why are my predicted values for my model (ZI negative binomial) all the same?

The check box in the Fit Model dialog is for both the dispersion parameter and the test for dispersion. If you decide that the dispersion is not significant, then run the model again after unchecking this option. Otherwise, you get both the test and a model with the extra over-dispersion parameter.

 

The whole model test is based on a likelihood ratio test between your model and the reduced model. It is an omnibus test to decide if the model, as a whole, is significant compared to the marginal distribution (response independent of all terms except the intercept). We generally hope that this test is significant!
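The mechanics of that likelihood ratio test can be sketched in a few lines. The log-likelihood values below are made up for illustration (in JMP they come from the fitted model reports), and the degrees of freedom are assumed to equal the number of terms beyond the intercept:

```python
from scipy import stats

# Hypothetical log-likelihoods: full model vs. intercept-only model.
ll_full, ll_reduced = -112.4, -120.9
df = 3  # assumed: number of model terms beyond the intercept

# Likelihood-ratio statistic and its chi-square p-value.
lr = 2 * (ll_full - ll_reduced)
p = stats.chi2.sf(lr, df)

print(f"LR = {lr:.2f}, p = {p:.4f}")
```

A small p-value here says the model as a whole beats the intercept-only fit, which is the outcome you hope for.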

 

The deviance test is based on a likelihood ratio test between your model and the saturated model. The saturated model is a perfect fit, that is, without bias. The deviance test is an omnibus test to decide if the model is biased. We generally hope that this test is not significant! It is a kind of lack of fit test for the GLM and other related models.
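As a sketch of what the deviance statistic measures, here is the Poisson form of the calculation on made-up data (the negative binomial deviance has a different formula, but the logic against the saturated model is the same; the fitted means and degrees of freedom below are hypothetical):

```python
import numpy as np
from scipy import stats

# Made-up observed counts and hypothetical fitted means from a model.
y = np.array([2.0, 5.0, 1.0, 7.0, 3.0, 4.0])
mu = np.array([2.5, 4.0, 1.5, 6.0, 3.5, 4.5])

# Poisson deviance: twice the log-likelihood gap between the fitted
# model and the saturated model (which matches each y exactly).
term = np.where(y > 0, y * np.log(y / mu), 0.0)
deviance = 2 * np.sum(term - (y - mu))

df = len(y) - 2                  # assumed: n minus number of parameters
p = stats.chi2.sf(deviance, df)  # lack-of-fit check: hope it is NOT small

print(f"deviance = {deviance:.3f}, p = {p:.3f}")
```

Unlike the whole model test, a large p-value is the good outcome here: it means no detectable lack of fit.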

NoScoped
Level I

Re: Why are my predicted values for my model (ZI negative binomial) all the same?

Thanks for the reply Mark. If I have a significant deviance test, what would be the appropriate next steps to address this bias? Both my Pearson and Deviance Prob>ChiSq are significant no matter what predictors/covariates are included.

 

Edit: The help books provided in JMP state that if I have a significant deviance test, I "may need to add higher-order terms to the model, add more covariates, change the distribution, or (in Poisson and binomial cases especially) consider adding an overdispersion parameter." Besides changing the distribution, I can't really pursue any of these other options. 

Re: Why are my predicted values for my model (ZI negative binomial) all the same?

I understand. I think that including the over-dispersion parameter helps a lot. Even so, the significant deviance means that you must accept biased estimates of the parameters and the mean response if you choose to use this model. The linear predictor may include transformations such as powers and cross terms if the data set supports their estimation. If you have a continuous predictor with only two levels, then you cannot include powers, for example.
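The point about powers and cross terms, and why a two-level continuous predictor rules out powers, can be seen directly in the design matrix. A numpy sketch with hypothetical data:

```python
import numpy as np

# Continuous predictor x and a 0/1 block indicator (hypothetical data).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
block = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

# Linear predictor expanded with a power (x^2) and a cross term (x*block).
X = np.column_stack([np.ones_like(x), x, x**2, x * block])
print(X.shape)  # (6, 4): all four columns are distinct, so all estimable

# But if the continuous predictor takes only two values, x and x^2 are
# collinear (0^2 = 0, 1^2 = 1), so the power term cannot be estimated.
x2 = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
X2 = np.column_stack([np.ones_like(x2), x2, x2**2])
print(np.linalg.matrix_rank(X2))  # 2, not 3 -> the squared column drops out
```

This is the "data set supports their estimation" caveat in matrix form: the extra terms only help when the observed values make their columns linearly independent.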

 

Your model might be limited by the data that is available.