I am teaching a second-semester statistical methods course to graduate students who are not in statistics. This year is the first time in 8 years that I have taught it. Eight years ago, everything was SAS-based. This year, I am letting students choose among R (which I can support easily), SAS (also easily supported), and JMP (also easily supported for most things). My issue is setting up a power / sample size calculation in JMP. As far as I can tell, JMP requires you to set up an analysis, perhaps with mock data, and then revise sigma, delta, and n for the situation at hand. I can do that after either the Fit Y by X / pooled t-test or the Fit Model dialogs.

My concern is that the JMP result differs from the hand calculation, the R computation, and the SAS computation (which are all essentially the same). Here's what I'm asking in the upcoming homework (the language is slightly different because the actual HW problem is part c in a sequence of questions).

Imagine a two-independent-sample comparison of means. Assume sigma = 4.1 and delta (the difference in means) = 5.98. If you use n = 5 in each group, what is the power of an alpha = 0.05 two-sample t-test? JMP tells me 0.59.

SAS:

proc power;
  twosamplemeans test=diff stddev=4.1 npergroup=5 meandiff=5.98 power=.;
run;

and R:

power.t.test(n=5, delta=5.98, sd=4.1, sig.level=0.05, strict=TRUE)

both give me 0.527. A hand calculation using a shifted-t approximation gives me essentially 0.5.
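For anyone who wants to check the 0.527 figure independently of all three packages: the exact power of the two-sided test is the probability that a noncentral t variate with 8 df and noncentrality parameter ncp = 5.98 / (4.1 * sqrt(2/5)) ≈ 2.306 exceeds the critical value t(0.975, 8) = 2.306 in absolute value. Here is a self-contained Python sketch of that calculation (my own code, not anything from JMP/SAS/R; the critical value is hard-coded rather than computed, and the noncentral t tail is obtained by numerically integrating over the chi-square mixing distribution):

```python
import math

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def chi2_pdf(v, df):
    # chi-square density with df degrees of freedom
    if v <= 0.0:
        return 0.0
    k = df / 2.0
    return v ** (k - 1.0) * math.exp(-v / 2.0) / (2.0 ** k * math.gamma(k))

def two_sample_power(delta, sigma, n, t_crit, df):
    # noncentrality parameter for equal group sizes n
    ncp = delta / (sigma * math.sqrt(2.0 / n))
    # Power = E_V[ P(|T'| > t_crit) ] where T' = (Z + ncp)/sqrt(V/df),
    # V ~ chi2(df); integrate over V with Simpson's rule (strict two-sided,
    # so both tails are counted, matching R's strict=TRUE)
    a, b, steps = 1e-9, 80.0, 4000  # steps must be even for Simpson
    h = (b - a) / steps
    total = 0.0
    for i in range(steps + 1):
        v = a + i * h
        s = math.sqrt(v / df)
        tail = (1.0 - norm_cdf(t_crit * s - ncp)) + norm_cdf(-t_crit * s - ncp)
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        total += w * tail * chi2_pdf(v, df)
    return total * h / 3.0

# t(0.975, 8) = 2.306004 (hard-coded)
power = two_sample_power(delta=5.98, sigma=4.1, n=5, t_crit=2.306004, df=8)
print(f"power = {power:.3f}")
```

This reproduces the 0.527 that SAS and R report, and it also shows why the shifted-t hand approximation lands at roughly 0.5: the noncentrality parameter (2.306) is almost exactly the critical value, so the shifted distribution sits nearly centered on the cutoff.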

What am I doing wrong when I set up the problem in JMP? More particularly, what do I tell the JMP-using students on Friday so they can do the problems correctly?

Thanks,

Philip Dixon