Hi, I am having the same issue as she is. My R+ is 1527, my sample has 76 pairs, and 11 zero differences. When I plug my values into the formula I get 97, but JMP is returning 581 for S. Please help me, I am going nuts.

S = R+ - (1/4)[N(N+1) - d0(d0+1)]
S = 1527 - (1/4)[76(76+1) - 11(11+1)]
S = 1527 - (1/4)(5720)
S = 97

Also, how is JMP calculating M for the sign statistic? Thank you!

I should mention that when adding these in the matched pairs test, I am putting dad height first and then height (which is the opposite of how I want it to read out).

Rachel
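For what it's worth, the hand arithmetic does come out to 97. Here is a quick check in Python (the variable names are my own; this only verifies the formula as written in the post, not whatever JMP computes internally):

```python
# Signed-rank statistic, using the formula from the post:
#   S = R+ - (1/4)[N(N+1) - d0(d0+1)]
# where d0 is the number of zero differences.
r_plus = 1527  # sum of positive signed ranks
n = 76         # number of pairs
d0 = 11        # number of zero differences

s = r_plus - (n * (n + 1) - d0 * (d0 + 1)) / 4
print(s)  # 97.0
```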
Parameter Estimates and the Saved Prediction Formula
This is in JMP v15. Maybe in earlier versions of JMP the formula was expressed differently.
Since the link function is Log, the prediction must be exponentiated back to the units of the response.
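To make that back-transformation concrete, here is a sketch with made-up coefficients (b0, b1, and x are not from any actual fit):

```python
import math

# Hypothetical log-link fit: log(mean count) = b0 + b1 * x
b0, b1 = 0.5, 0.3  # made-up parameter estimates
x = 2.0

linear_predictor = b0 + b1 * x               # on the log scale
predicted_mean = math.exp(linear_predictor)  # back on the response scale
print(round(predicted_mean, 4))  # 3.0042
```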
Not a Poisson expert, but it's a fun distribution because its mean and variance are equal (both lambda). So when lambda is large it looks roughly normal, and when lambda is close to 0 it looks more log-normal (strongly right-skewed).
This is from the Scripting Index, which is super useful for understanding the behavior of the distribution at different levels of lambda:
Names Default To Here( 1 );
lambda = 4;
New Window( "Example: Poisson Probability",
	V List Box(
		pdy = Graph Box(
			Y Scale( 0, 0.20 ),
			X Scale( -1, 40 ),
			Pen Color( "red" );
			Pen Size( 2 );
			// Draw a vertical line at each k with height P(K = k)
			For( k = 0, k <= 40, k++,
				V Line( k, 0, Poisson Probability( lambda, k ) )
			);
			Text( {20, 0.18}, "lambda = ", Round( lambda, 2 ) )
		),
		H List Box(
			Slider Box( 0, 40, lambda, pdy << reshow ),
			Text Box( " \!U03BB" )
		)
	)
);
There are many ways to think about (i.e., hypothesize) tests. The common whole model test assumes (null hypothesis) that the model is not significant. Another way to say it is that all the parameters (except for the constant) are 0. The alternative hypothesis is that not all the parameters are 0. So you might think about the whole model test as a way to answer the question, "Are any of the terms in this model significant?" You can also ask this question of individual terms, of course.
The goodness of fit test is asking, "Is there evidence that I am missing terms in the model, i.e., that there are systematic effects left over in the residuals?" That is a concern for lack of fit. So the test assumes (null hypothesis) that the model is a good fit. The alternative hypothesis is that the model is not a good fit because there appears to be variation in the response that is not accounted for by the variance part of the model. The saturated model provides the estimate of the variance so that it can be compared to the variance of the fitted model. They won't be identical, but is the difference unusual (statistically significant)?
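To sketch how a model comparison like this works mechanically (the deviances below are hypothetical, and 5.991 is the standard chi-square table value for 2 degrees of freedom at alpha = 0.05, not something computed here):

```python
# Likelihood-ratio comparison of two nested models:
# H0 says the 2 extra terms in the fuller model are all 0.
deviance_reduced = 120.0  # hypothetical: smaller model
deviance_full = 104.5     # hypothetical: model with 2 extra terms

lr_stat = deviance_reduced - deviance_full  # ~ chi-square, 2 df, under H0
chi2_crit_95_df2 = 5.991                    # textbook critical value

reject_h0 = lr_stat > chi2_crit_95_df2
print(lr_stat, reject_h0)  # 15.5 True
```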
Perhaps we should talk about a good model versus a better model.