utkcito
Level III

meaning of random effects p-value in the report

Hello,

Following up on a previous thread (https://community.jmp.com/t5/Discussions/MANOVA-repeated-measures-two-factors-multiple-comparisons/m...), I have run an experiment on 20 subjects (data for subject #6 is missing, so 19 samples in total). There are 5 sampling times. Some patients had complications and others did not. The data are normalized by the median, and I am using the log-transformed normalized data to make them more Gaussian. I have set up a mixed model with patient as a random effect, and sampling time and complications as fixed effects. See attached data.

 

When running the model, the patient[complications] random effect (i.e., patients nested within complications) sometimes has a p-value below 0.05 in the "Random Effects Covariance Parameter Estimates" report. What does that mean? Surely the random-effects variable cannot be significant, since I am defining it as random. Or am I getting something wrong conceptually?

 

thanks,

 

Uriel.

3 REPLIES

Re: meaning of random effects p-value in the report

Of course a random effect can be significant. That conclusion means that the variance represented by this effect (a random effect is estimated as a variance component) is significantly different from zero; the null hypothesis is that this effect contributes no variance at all.
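To make this concrete, here is a minimal sketch of what "testing a variance component against zero" means. This is not JMP and not its covariance-parameter test; it is a classical one-way random-effects ANOVA F-test on simulated data loosely shaped like this design (19 patients, 5 sampling times; all standard deviations are assumed values for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_patients, n_times = 19, 5
sigma_u, sigma_e = 1.0, 0.5          # between-patient and residual SDs (assumed)

# Each patient gets a random intercept u_i ~ N(0, sigma_u^2);
# observations scatter around it with residual noise.
u = rng.normal(0.0, sigma_u, n_patients)
y = u[:, None] + rng.normal(0.0, sigma_e, (n_patients, n_times))

# One-way random-effects ANOVA: H0 is that the patient variance is zero.
grand = y.mean()
msb = n_times * np.sum((y.mean(axis=1) - grand) ** 2) / (n_patients - 1)
msw = np.sum((y - y.mean(axis=1, keepdims=True)) ** 2) / (n_patients * (n_times - 1))

f_stat = msb / msw
p_value = stats.f.sf(f_stat, n_patients - 1, n_patients * (n_times - 1))

# Method-of-moments estimate of the patient variance component.
var_patient = max((msb - msw) / n_times, 0.0)
print(f"F = {f_stat:.2f}, p = {p_value:.2e}, patient variance = {var_patient:.2f}")
```

A small p-value here says only that the between-patient variance is nonzero, i.e., patients genuinely differ from one another; it says nothing about the overall mean being shifted.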

utkcito
Level III

Re: meaning of random effects p-value in the report

Then doesn't that mean, by definition, that it is not random? Or rather that it is a bias (i.e., a shift of the intercept) rather than a zero-mean source of variance?

Re: meaning of random effects p-value in the report

This distinction is a common point of confusion.

The linear model contains two kinds of effects: fixed and random. Fixed effects contribute to the response mean: they are attributed to changing factor or predictor levels, and they are reproducible. Random effects contribute to the response variance. At a minimum, the random errors are a random effect; in other cases there are additional sources of random variation (e.g., subjects, days, et cetera). These are not attributed to a particular factor or predictor but to a group or sample of observations, and they are not reproducible (new days, new subjects, et cetera). Generally the random contribution is small relative to the mean, and we simply want to account for it, not explain it or assign a cause.

The random effects are (usually) modelled as a normal distribution with a mean of zero, so they do not bias the response mean; they only add spread around it.
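A quick simulation (NumPy only, with assumed values) illustrates this last point: random intercepts drawn from N(0, σ²) spread the patients around the fixed mean but do not shift it, which is why a significant variance component is not a bias:

```python
import numpy as np

rng = np.random.default_rng(0)

true_mean = 10.0                      # fixed-effect (population) mean, assumed
sigma_u, sigma_e = 1.0, 0.5           # patient and residual SDs, assumed

# Draw many patients, each with a zero-mean random intercept u_i ~ N(0, sigma_u^2).
n_patients, n_times = 2000, 5
u = rng.normal(0.0, sigma_u, n_patients)
y = true_mean + u[:, None] + rng.normal(0.0, sigma_e, (n_patients, n_times))

# The random intercepts add between-patient variance, but the grand mean
# stays at the fixed-effect mean.
print(f"grand mean = {y.mean():.3f} (true fixed mean = {true_mean})")
print(f"SD of patient means = {y.mean(axis=1).std(ddof=1):.3f}")
```

With enough patients the grand mean recovers the fixed mean, while the between-patient standard deviation reflects the random-effect variance, the quantity the significance test in the report is about.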