Reinaldo
Level IV

Effect size in a repeated-measures ANOVA

Hi all,
 
I am performing a repeated-measures ANOVA using Fit Model (Personality = Standard Least Squares; Emphasis = Minimal Report or Effect Leverage; Method = REML), but the Fit Least Squares report does not show the model sum of squares (SS_M) or the total sum of squares (SS_T).
 
I need SS_M and SS_T to calculate the effect size (i.e., eta squared):
 
eta squared = SS_M / SS_T
 
Would you please help me find these quantities (SS_M and SS_T)?
 
Thank you.
~Rei
cwillden
Super User (Alumni)

Re: Effect size in a repeated-measures ANOVA

I'm not sure how to coerce that information out of Fit Model, but if it's a simple one-way ANOVA with repeated measures, you could use Fit Y by X with the subject ID as a blocking factor.  When I tried it, I got an equivalent p-value for the treatment factor between Fit Y by X and Fit Model.  Fit Y by X will provide the full ANOVA table, so you can compute eta^2.
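For the one-way repeated-measures case, the eta² you would read off the full Fit Y by X ANOVA table can be sketched in plain Python (the balanced 3-treatment × 4-subject data below are made up for illustration, not from this thread):

```python
# Blocked one-way ANOVA sums of squares, then eta^2 = SS_treatment / SS_total.
# data[treatment][subject]; balanced, made-up values.
data = [
    [4.0, 5.0, 6.0, 5.0],   # treatment 1, subjects 1-4
    [6.0, 7.0, 8.0, 7.0],   # treatment 2
    [5.0, 6.0, 7.0, 6.0],   # treatment 3
]
t = len(data)          # number of treatments
s = len(data[0])       # number of subjects (blocks)
grand = sum(sum(row) for row in data) / (t * s)

# treatment (model) sum of squares: s * sum of squared treatment-mean deviations
ss_m = s * sum((sum(row) / s - grand) ** 2 for row in data)
# total sum of squares: squared deviations of every observation from the grand mean
ss_t = sum((y - grand) ** 2 for row in data for y in row)

eta_squared = ss_m / ss_t
print(round(eta_squared, 4))
```

JMP's ANOVA table would also report the block (subject) and error sums of squares, but only SS_M and SS_T enter eta².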

-- Cameron Willden
Reinaldo
Level IV

Re: Effect size in a repeated-measures ANOVA

Hi @cwillden, thank you for your reply and the information.

It seems the random effect (REML) is not considered in Fit Y by X. So, when I use Method = REML in a repeated-measures ANOVA, how can the effect size be calculated?

Thank you.

~Rei
cwillden
Super User (Alumni)

Re: Effect size in a repeated-measures ANOVA

I suggested Fit Y by X because it does have an option to specify a Block column, which is a potential way to handle your repeated measures.  The only conditions for Fit Y by X are that the blocks have to be balanced and that you can only have 1 treatment factor.  If you're dead set on using REML, I don't know how to get the full ANOVA table from Fit Model with random effects.  I don't think you can.

 

I guess you could work it out backwards by hand.  The Fixed Effect Tests table will give you the F-ratio and the numerator and denominator degrees of freedom.  The MSE is in the variance components table.  First, we need to solve for the treatment mean square (let's call it MSM) and the total mean square (MST).

 

To get MSM, multiply the F-ratio by MSE.  Now, add MSM and MSE together to get MST.

 

Now, all that's left to do is multiply each of those by their respective degrees of freedom to get SSM and SST.  SSM = MSM*DF (numerator degrees of freedom).  SST = MST*(DF + DFDen).
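Those steps are just arithmetic, so they can be sketched outside JMP in Python. The F-ratio, MSE, and degrees of freedom below are placeholder values; substitute the ones from your own Fixed Effect Tests and variance components tables:

```python
# Back-calculate sums of squares from a REML Fit Model report.
# F-ratio and DFs come from the Fixed Effect Tests table; MSE is the
# residual variance component. All values below are placeholders.
f_ratio = 5.0   # placeholder: F-ratio for the treatment effect
mse = 2.0       # placeholder: residual variance component (MSE)
df_num = 3      # placeholder: numerator degrees of freedom
df_den = 27     # placeholder: denominator degrees of freedom

msm = f_ratio * mse            # treatment mean square
mst = msm + mse                # total mean square
ssm = msm * df_num             # treatment sum of squares
sst = mst * (df_num + df_den)  # total sum of squares

eta_squared = ssm / sst
print(round(eta_squared, 4))
```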

 

To my understanding, computing the effect size from this approach makes it relative to the total variance remaining after accounting for random effects.  If you want to calculate it relative to the total variance including random effects, you could compute SST on your response variable column with a column formula like Col Sum((:Y - Col Mean(:Y))^2).  
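That JSL column formula has a direct Python analogue, shown here with made-up response values rather than data from the thread:

```python
# Total sum of squares of a response column: the Python analogue of
# the JSL formula Col Sum((:Y - Col Mean(:Y))^2). Data are made up.
y = [4.1, 5.3, 3.8, 6.0, 5.5, 4.9]
mean_y = sum(y) / len(y)
sst_total = sum((v - mean_y) ** 2 for v in y)
print(sst_total)
```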

-- Cameron Willden
Reinaldo
Level IV

Re: Effect size in a repeated-measures ANOVA

Brilliant solution, Cameron! Thank you very much! I would just like to understand one point:

 

In Fit Model, I added the following under Construct Model Effects:

Subject & Random
Timepoint
Subject * Timepoint & Random

 

In the Fit Model report, the REML Variance Component Estimates table shows random effect rows for Subject, Subject * Timepoint, and Total. Given that, when you suggest calculating MSM, should I use the variance component for Subject as the MSE to multiply by the F-ratio?

 

Thank you!

~Rei
cwillden
Super User (Alumni)

Re: Effect size in a repeated-measures ANOVA

I would guess the subject*timepoint variance component is confounded with the residual, which is why the residual variance component is not in the REML table. That is the MSE, not Subject.
-- Cameron Willden
Reinaldo
Level IV

Re: Effect size in a repeated-measures ANOVA

Thank you for your comments, Cameron! Oh, yes, the MSE refers to Error, not Subject. That's correct! So, is your suggestion based on Fit Y by X rather than Fit Model (Fit Least Squares)? 

 

I couldn't find the MSE in the REML Variance Component Estimates table (Fit Model report). However, I think I can use the RMSE from the Summary of Fit (i.e., MSE = RMSE^2) to calculate MSM.
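That relationship is just squaring the reported value; a minimal Python sketch, with a placeholder RMSE in place of a real Summary of Fit value:

```python
import math

rmse = 1.4142          # placeholder: RMSE read from the Summary of Fit
mse = rmse ** 2        # MSE recovered by squaring the RMSE
# sanity check: taking the square root recovers the RMSE
assert math.isclose(math.sqrt(mse), rmse)
print(round(mse, 4))
```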

~Rei
cwillden
Super User (Alumni)

Re: Effect size in a repeated-measures ANOVA

What I’m saying is based on Fit Model. Typically, the REML table will have an entry for the residual variance. That variance component is the MSE. Because you included the subject*treatment random effect, it is most likely replacing the residual variance component, since the two would be completely confounded (unless you have repeated measures on each subject*treatment combination). In that case, the subject*treatment variance component is the MSE. I would recommend simply removing that subject*treatment random effect since you really can’t distinguish it from residual error. If you do that, you should find that the REML table is exactly the same except the label for Subject*Treatment is replaced with “Residual”.

You should find the RMSE is the square root of the subject*treatment variance component. Either way, you should get to the same error.
-- Cameron Willden