
Different types of mean comparisons yield different results; why?

Hello dear community,

 

I am comparing the means of three (small) samples. I have three groups (A, B, C). If I compare them in two steps (A with B, B with C), I get a significant mean difference according to both the t-test and the pooled t-test.

However, if I run a comparison on A, B, and C at the same time, using "Each Pair, Student's t", I get no significant difference.

(Example is provided as a picture).

 

Could someone please help me understand the theory behind the difference in the results?

 

Thank you very much!

(Attached screenshot: ttest_jmpcommunity.png)

1 ACCEPTED SOLUTION

Victor_G
Super User

Re: Different types of mean comparisons yield different results; why?

Hi @A_Random_Name,


Using a specific statistical test implies that the assumptions of this test have been met.
For the t-test, four main assumptions should be satisfied:

  • The data are continuous.
  • The sample data have been randomly sampled from a population.
  • There is homogeneity of variance (i.e., the variability of the data in each group is similar).
  • The distribution is approximately normal.

(Source: The t-Test | Introduction to Statistics | JMP)

 

In your case, there may be several problems with using a t-test here, which may explain the differences you see:

  • Because of the low sample size, it may be difficult to check normality, so a parametric test (a statistical test assuming normal distributions) may not be recommended. To check (approximate) normality, you can have a look at the Normal Quantile Plot.
  • Homogeneity of variance may also not be respected (in your example, group B seems to have no variance, group C very little variance, and group A more variance). To check this, you can select "Unequal Variances" in the red triangle menu of the "Fit Y by X" platform. JMP will also provide Welch's test, a modified t-test that does not assume equal variances between your two groups, which may be informative for some of the comparisons you're doing.
  • If you want to compare more than two groups, the t-test shouldn't be used; instead use Tukey-Kramer (the parametric option, which limits type I error) or Steel-Dwass (the non-parametric version). There are many statistical tests available; take the time to see which one best fits your problem in terms of goals and assumptions met (see the sketch just after this list).
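
To make the distinction concrete, here is a minimal sketch in Python (using SciPy and statsmodels rather than JMP, and with invented numbers since I don't have your data) that runs the pairwise pooled and Welch t-tests next to a Tukey-Kramer all-pairs comparison:

```python
# Hypothetical numbers (NOT the original data): three small groups whose
# spreads differ, as described in the question.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

a = np.array([4.8, 5.1, 5.6, 6.0])    # group A: more variance
b = np.array([4.99, 5.0, 5.0, 5.01])  # group B: almost no variance
c = np.array([5.4, 5.5, 5.5, 5.6])    # group C: very small variance

# Two-sample tests, one pair at a time (the "two steps" from the question)
for name, (x, y) in {"A vs B": (a, b), "B vs C": (b, c)}.items():
    _, p_pooled = stats.ttest_ind(x, y)                  # pooled-variance t-test
    _, p_welch = stats.ttest_ind(x, y, equal_var=False)  # Welch's t-test
    print(f"{name}: pooled p = {p_pooled:.4f}, Welch p = {p_welch:.4f}")

# All three groups at once with Tukey-Kramer (Tukey's HSD), which pools the
# variance estimate over all groups and controls the familywise type I
# error rate across the three pairwise comparisons.
values = np.concatenate([a, b, c])
labels = ["A"] * len(a) + ["B"] * len(b) + ["C"] * len(c)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```

With patterns like these (one group nearly constant, another more dispersed), it is not unusual for the one-pair-at-a-time tests and the all-pairs procedure to disagree about significance.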

Also take a look at the confidence intervals: you'll see in your comparisons that the confidence intervals overlap, which may indicate that you don't have a large effect size and/or a real statistical difference. Given the low sample size and the unmet assumptions for a parametric test like the t-test, the significant results may simply be type I errors (mistaken rejections of an actually true null hypothesis; the null hypothesis here being that the group means are the same/equivalent).
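
To see this overlap numerically, here is a quick sketch along the same lines (again with hypothetical numbers, not your data):

```python
# Sketch: 95% confidence interval for each group mean; overlapping intervals
# suggest the group differences may be less clear-cut than individually
# significant tests make them look.
import numpy as np
from scipy import stats

groups = {
    "A": np.array([4.8, 5.1, 5.6, 6.0]),
    "B": np.array([4.99, 5.0, 5.0, 5.01]),
    "C": np.array([5.4, 5.5, 5.5, 5.6]),
}
for name, x in groups.items():
    mean, sem = x.mean(), stats.sem(x)
    lo, hi = stats.t.interval(0.95, df=len(x) - 1, loc=mean, scale=sem)
    print(f"{name}: mean = {mean:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```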

 

Don't hesitate to check the JMP statistical portal or the JMP Help in the Fit Y by X platform for more details about the statistical tests available.
I hope this initial information helps you!

Victor GUILLER

"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)


5 REPLIES
ron_horne
Super User (Alumni)

Re: Different types of mean comparisons yield different results; why?

Hi @A_Random_Name,

On top of all this, I would like to add that significance is not everything. You need to ask yourself whether the differences between the groups are meaningful. Your graphs show some kind of range, drawn with blue lines, within which all the dots fall. If you only need to be within the blue lines, it looks like you are good to go, even for the most extreme cases in the sample.

 

statman
Super User

Re: Different types of mean comparisons yield different results; why?

First, welcome to the community. I'll give you another explanation: the significance of any model effect is contingent on what effects are in the model. If you add or remove terms from the model, you can (and will) change the model effects (estimates, significance, etc.). You might also question collinearity. A small sketch of this idea follows below.
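
A hedged illustration in Python (invented numbers, not the actual data): the same A-vs-B contrast sits in a different error context depending on whether group C is part of the fitted model, because the pooled error variance and its degrees of freedom change.

```python
# Sketch: the model's error estimate depends on which groups are in the model.
import numpy as np
from scipy import stats

a = np.array([4.8, 5.1, 5.6, 6.0])    # hypothetical group A
b = np.array([4.99, 5.0, 5.0, 5.01])  # hypothetical group B
c = np.array([5.4, 5.5, 5.5, 5.6])    # hypothetical group C

print(stats.f_oneway(a, b))     # two-group model: error pooled over A and B
print(stats.f_oneway(a, b, c))  # three-group model: different MSE and df,
                                # so effect significance can shift
```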

"All models are wrong, some are useful" G.E.P. Box
peng_liu
Staff

Re: Different types of mean comparisons yield different results; why?

I suspect the assumptions are different. It seems that you have masked the level names, and I am not sure whether any errors crept in during the masking. I am going to highlight four numbers in your screenshots, and I hope they correspond to one another (e.g., on the surface the second ANOVA shows -1.5 between B/C, and there is a 1.5 in the LSD output, but it appears to be A/C, so I am not sure).

(Screenshot: peng_liu_0-1658370119304.png)

I want to point out that the values of the differences are the same, but the standard errors of the differences are different. In the ANOVA report, there is a note: "Assuming equal variances." For the LSD comparisons, I don't see an explicit statement about the equal- or unequal-variance assumption in the documentation, but I believe the assumption is unequal variances. The details of the standard error calculation can be found on the Comparison Circles page.
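
For reference, these are the standard textbook formulas behind that distinction (general formulas, not quoted from the JMP documentation). With a common variance pooled across all groups (the equal-variance case), the standard error of a pairwise difference of means is

\[ \mathrm{SE}(\bar{y}_i - \bar{y}_j) = \sqrt{\mathrm{MSE}\left(\frac{1}{n_i} + \frac{1}{n_j}\right)}, \]

whereas without the equal-variance assumption each pair uses only its own sample variances:

\[ \mathrm{SE}(\bar{y}_i - \bar{y}_j) = \sqrt{\frac{s_i^2}{n_i} + \frac{s_j^2}{n_j}}. \]

The estimated difference \(\bar{y}_i - \bar{y}_j\) is the same either way; only its standard error, and hence the t ratio and p-value, changes.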

 

Re: Different types of mean comparisons yield different results; why?

Thank you all very much for your answers. They helped me understand the differences and gave me new ideas for how to analyze the data. I will "accept" one solution, but obviously all the answers are complementary. Thank you all and have a nice day!