ron_horne
Super User (Alumni)

Sidak correction for t test

Dear fellow members of the community,

I would like to ask whether anyone knows of a way to produce multiple t tests adjusted with the Sidak correction.

https://en.wikipedia.org/wiki/%C5%A0id%C3%A1k_correction_for_t-test

In particular, I would like to use it in the context of multiple regression, for comparisons between categories.

For example, suppose I use the Big Class data table and run the following model: height = weight + sex + age + age*sex. I would now like to compare each group in the interaction term age*sex to all the others using the Sidak correction.
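To make the adjustment concrete, here is a minimal sketch (Python rather than JSL) of the per-comparison alpha I have in mind entering in the platform; the group count of 12 (6 ages x 2 sexes in Big Class) is only my assumption:

```python
# Sketch (not JMP/JSL): per-comparison alpha to enter in the platform,
# assuming m pairwise comparisons among the age*sex groups.

def sidak_alpha(alpha, m):
    """Sidak-adjusted per-comparison alpha for m tests."""
    return 1.0 - (1.0 - alpha) ** (1.0 / m)

def bonferroni_alpha(alpha, m):
    """Bonferroni-adjusted per-comparison alpha for m tests."""
    return alpha / m

k = 12                  # assumed group count: 6 ages x 2 sexes in Big Class
m = k * (k - 1) // 2    # 66 pairwise comparisons
print(sidak_alpha(0.05, m))       # ~0.000777
print(bonferroni_alpha(0.05, m))  # ~0.000758
```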

 

I do not need to do this many times, so I do not mind producing the final connecting letters report manually.

Thank you very much,

Ron

2 REPLIES
Byron_JMP
Staff

Re: Sidak correction for t test

Ron,

Just curious, what does the Sidak correction get you that Bonferroni, Tukey or FDR doesn't?

 

-B

JMP Systems Engineer, Health and Life Sciences (Pharma)
ron_horne
Super User (Alumni)

Re: Sidak correction for t test

Dear @Byron_JMP, thanks for asking.

The Sidak correction gets me nothing. Yet, when it was requested by an anonymous reviewer at an academic peer-reviewed journal, I had no other choice...

I think that, fundamentally, this is a case I come across occasionally where similar (or even identical) statistical tools are commonly used in different disciplines or are simply named differently, such as VIF vs. tolerance, constant vs. intercept, beta vs. standardized coefficients, and so on.

On the other hand, while we are on the topic, could you please let me know if there is any other way of adjusting for Bonferroni apart from "manually" setting the alpha level in the Fit Model platform and then running a Student's t test?

The issue is that with a few factors, each with a different number of categories, I need to run the model several times, each time with a different specified alpha. Wouldn't it be better for the alpha setting to be per factor rather than for the whole model?

What I currently do is run the model with the post hoc test I want, save the script, and then tweak the script to use a different alpha for each factor.
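For anyone following the same route, this is roughly how the per-factor alphas can be computed before tweaking the script. It is a sketch in Python rather than JSL, and the level counts below are placeholders, not values from a real model:

```python
# Sketch: per-factor adjusted alpha when each factor has its own
# number of categories (level counts below are placeholders).

alpha = 0.05
factors = {"sex": 2, "age": 6, "age*sex": 12}

for name, k in factors.items():
    m = k * (k - 1) // 2                 # pairwise comparisons for this factor
    bonferroni = alpha / m
    sidak = 1 - (1 - alpha) ** (1 / m)
    print(f"{name}: m={m}, Bonferroni={bonferroni:.5f}, Sidak={sidak:.5f}")
```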

Another point for anyone using this method: when setting an alpha level other than 0.05, note that the connecting letters report is adjusted, while the default formatting of p-values (colors and stars) in the Ordered Differences or detailed report is not!
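One possible workaround, outside JMP itself, is to copy the raw p-values out of the Ordered Differences report and adjust them externally. A sketch using statsmodels (assuming its multipletests function is available; it offers both "bonferroni" and "sidak" methods):

```python
# Sketch: adjust raw p-values taken from the Ordered Differences report,
# since the report's coloring/stars use the unadjusted 0.05 cutoff.
from statsmodels.stats.multitest import multipletests

raw_pvalues = [0.001, 0.012, 0.049, 0.20]   # placeholder values
reject, adjusted, _, _ = multipletests(raw_pvalues, alpha=0.05, method="sidak")
print(adjusted)  # Sidak-adjusted p-values, comparable to 0.05 directly
print(reject)    # which comparisons remain significant after adjustment
```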

Anyone willing to share their experience on the topic is welcome.