I am a regular user of JMP, but without experience in scripting. I have used Fisher's exact test in my analysis and am getting the results as posted. I have 2 questions:
1. Why do I get two different exact tests and p-values? Which one should I use?
2. In the odds ratio, the confidence interval does not match the p-value (which is <0.05). I am guessing it has something to do with the way the CIs are calculated. Can you advise on the best way to calculate the exact CI here? I am not experiencing this issue with other exact analyses on the same dataset.
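For context on the first question: the left-tailed, right-tailed, and two-tailed exact p-values all come from the same hypergeometric distribution over 2x2 tables with the observed margins; for a two-sided hypothesis the two-tailed value is the one to report. A minimal pure-Python sketch (the counts here are hypothetical, not from the posted analysis):

```python
from math import comb

def fisher_p_values(a, b, c, d):
    """One- and two-tailed Fisher exact p-values for the 2x2 table
    [[a, b], [c, d]], computed from the hypergeometric distribution."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    denom = comb(n, c1)

    def pmf(x):  # probability of a table with x in the top-left cell
        return comb(r1, x) * comb(r2, c1 - x) / denom

    lo, hi = max(0, c1 - r2), min(r1, c1)  # feasible values of the cell
    p_obs = pmf(a)
    # two-tailed: sum all tables at least as extreme (as improbable) as observed
    two = sum(pmf(x) for x in range(lo, hi + 1) if pmf(x) <= p_obs * (1 + 1e-9))
    right = sum(pmf(x) for x in range(a, hi + 1))  # alternative: greater
    left = sum(pmf(x) for x in range(lo, a + 1))   # alternative: less
    return {"two_sided": two, "less": left, "greater": right}

# example: table [[3, 1], [1, 3]]
res = fisher_p_values(3, 1, 1, 3)
```

On the second question, the apparent mismatch is usually because the default odds-ratio CI is an asymptotic (Wald-type) interval, while the p-value is exact, so near the significance boundary they can disagree.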
I have no idea what you might have done wrong or differently with this analysis compared to other analyses with the same data set. What other analysis is there?
I do not get the same result for the analysis of the same data. See my result:
Thanks for your replies. I also think the 'weight' (which is a sampling weight for the study design) may have changed things, as @Thierry_S suggests. If I only use the 'counts' from this 2*2 table in the 'frequency' role of the Contingency platform, I get the same results as you, @markbailey. As for the alternative hypothesis, I would need to use the one that corresponds to the two-tailed values only.
Why are weights for non-responders used? How are the weights determined?
Using the weight role will change the result of the analysis.
This was a survey with a ~50% response rate and differences between responders and non-responders, thus our decision to adjust for non-response. Weights were calculated on the basis of 3 baseline characteristics that affected response, as follows: for each combination of the 3 variables, the total number with that combination in the original cohort was divided by the number of patients with that combination who completed the survey.
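That rule is a standard inverse-probability-of-response weighting scheme, and it can be sketched directly (the variable combinations and counts below are hypothetical; the only logic is "cohort count divided by responder count"):

```python
from collections import Counter

def response_weights(cohort, responded):
    """cohort: one (var1, var2, var3) tuple per patient in the full cohort;
    responded: the same tuples for patients who completed the survey.
    Returns, per combination, weight = cohort count / responder count."""
    total = Counter(cohort)
    resp = Counter(responded)
    return {combo: total[combo] / resp[combo] for combo in resp}

# hypothetical example: 10 of one combination (5 responded),
# 6 of another (2 responded)
cohort = [("F", "older", "urban")] * 10 + [("M", "younger", "rural")] * 6
responded = [("F", "older", "urban")] * 5 + [("M", "younger", "rural")] * 2
w = response_weights(cohort, responded)
```

Each responder then stands in for `weight` patients from the original cohort, which is why the weighted analysis differs from the unweighted one.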
How do you know "differences between responders and non-responders" if the non-responders did not respond? I better let someone who is more familiar with this kind of analysis take over!
Hello @markbailey. I was wondering if you had any additional insights into this. I haven't found a solution to this problem yet.
I tried making contingency tables (with the numbers I get after weighting); this gives me an OR and a two-tailed p-value for the exact test that are consistent with each other. I ran these for several analyses, which give similar but not exactly the same results compared to when the weighting variable is used.
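A minimal sketch of that workaround, assuming individual records with an exposure, an outcome, and a non-response weight (all names and numbers hypothetical). Note that an exact test needs integer counts, so rounding the weighted cells before testing is an extra assumption, which would explain results that are similar but not identical to using the weight role directly:

```python
def weighted_table(rows):
    """rows: iterable of (exposed: bool, outcome: bool, weight: float).
    Returns the weighted 2x2 counts [[a, b], [c, d]] with exposed cases
    in the first row and outcome-positive cases in the first column."""
    t = [[0.0, 0.0], [0.0, 0.0]]
    for exposed, outcome, w in rows:
        t[0 if exposed else 1][0 if outcome else 1] += w
    return t

# hypothetical weighted records
rows = [(True, True, 2.0), (True, False, 2.0),
        (False, True, 1.5), (False, False, 1.5)]
tab = weighted_table(rows)
a, b = tab[0]
c, d = tab[1]
or_est = (a * d) / (b * c)  # odds ratio from the weighted cells
```

The rounded cells of `tab` are what would go into the Freq role of the Contingency platform for the exact test.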