QW
Level III

Mindset when interpreting mixed model results

Hello,

 

I have recently been exploring the use of mixed models as opposed to simpler fixed effects models. I have a few questions that I haven't found clear answers to by reading documentation.

 

1. What exactly are the variance components of the random effects?

Assuming the mixed model is: Response = intercept + fixed effects + random effects + residual error

- If I understand correctly, fixed effects are reported with effect estimates, standard errors, and p-values, and the residual error is what is left over between the actual data and the values predicted from the fixed effects. This residual error is assumed to follow a normal distribution with mean = 0. Is it correct to say that JMP is then partitioning this residual error between 'random effects' and 'residual', using REML to estimate what a normal distribution with mean = 0 should look like for the 'random effects'? If so, is there an intuitive way to understand how JMP assigns this 'amount' of variance to each random effect?
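To make question 1 concrete for myself, I tried a quick sketch outside of JMP (Python/statsmodels rather than JMP's Fit Mixed, and simulated rather than real data, so treat it only as an illustration): simulate shrinkage data with a true batch effect, then let REML split the unexplained variance into a batch component and a residual component.

```python
# Illustrative sketch (not JMP itself): simulate data with a true batch
# random effect, then let REML partition the total unexplained variance
# into a batch variance component and a residual variance component.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_batches, reps = 8, 6
batch = np.repeat(np.arange(n_batches), reps)          # 8 batches, 6 parts each
oven = np.tile([0, 1, 0, 1, 0, 1], n_batches)          # both ovens in every batch
batch_shift = rng.normal(0.0, 2.0, n_batches)[batch]   # true batch sd = 2
y = 10.0 + 1.5 * oven + batch_shift + rng.normal(0.0, 1.0, n_batches * reps)

df = pd.DataFrame({"shrinkage": y, "oven": oven, "batch": batch})
m = smf.mixedlm("shrinkage ~ oven", df, groups=df["batch"]).fit(reml=True)

print(m.cov_re)   # estimated batch variance component (true value: 4)
print(m.scale)    # estimated residual variance        (true value: 1)
```

The variance-component table this prints is the analogue of JMP's REML variance components report: the 'batch' component and the 'residual' are estimated jointly, not carved out of a single fixed-model residual after the fact.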

 

2. When to assign an uncontrolled variable as 'random' versus just leaving it out of the model?

For example, in an experiment with oven and batch as independent variables and mold shrinkage as the response, 'oven' is fixed because I am specifically interested in comparing the effect of each oven on shrinkage, while 'batch' is just a source of random variability - I don't care about specific batch-to-batch differences, just how much overall variability varying batches contributes. However, I believe I would get the same estimate for 'oven' whether I declare 'batch' as a random effect or exclude it from the model entirely. Is it therefore correct to say that unless I am curious about which uncontrolled variables contribute to my overall variability (more relevant if I have many random variables and want a breakdown), including random effects versus leaving them out won't change the model's accuracy with respect to my fixed effects?

 

Note: I did see from this talk by Claassen (https://www.youtube.com/watch?v=P1wjRtgM92I) that including 'batch' as a fixed effect would definitely hurt the predictive capability of the other fixed effect ('oven'), so that is a no-go. I am asking whether it matters to have a model with 'oven' as a fixed effect and 'batch' as a random effect, versus 'oven' as a fixed effect and nothing else.

 

Thanks!

QW

Presenter: Elizabeth Claassen (JMP Statistical Discovery) The use of fixed and random effects has a rich history. They often go by other names, including blocking models, variance component models, nested and split-plot designs, hierarchical linear models, multilevel models, empirical Bayes ...
3 REPLIES
QW
Level III

Re: Mindset when interpreting mixed model results

Thoughts anyone?

MRB3855
Super User

Re: Mindset when interpreting mixed model results

Hi @QW : I'm sure you've created a lot of interest here...however, there is a lot to unpack. I'll give it a start.

In no particular order.

1.  You: "Is it correct to say that JMP is then partitioning this residual error between 'random effects' and 'residual'?"

No. You can see this, using your example from part 2 of your question, if you compare the residual error of the fixed model (oven and batch both fixed) with the residual error of the mixed model (batch random, oven fixed). You will notice that they are the same (or nearly so, depending on any lack of balance with respect to the number of reps within each batch and/or the different estimation procedures).
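A quick numerical check of this point (a sketch in Python/statsmodels on simulated data, standing in for the comparison you would run in JMP): fit batch as a fixed effect, then as a random effect, and compare the residual variances.

```python
# Sketch (Python/statsmodels, simulated data) of the comparison described
# above: residual variance with batch as a FIXED effect vs. batch as a
# RANDOM effect. In a balanced design the two are essentially the same,
# i.e. the mixed model is NOT carving the random effect out of the residual.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_batches, reps = 10, 4
batch = np.repeat(np.arange(n_batches), reps)
oven = np.tile([0, 1, 0, 1], n_batches)                # balanced within batch
y = (10.0 + 1.5 * oven
     + rng.normal(0.0, 2.0, n_batches)[batch]          # batch-to-batch sd = 2
     + rng.normal(0.0, 1.0, n_batches * reps))         # residual sd = 1
df = pd.DataFrame({"shrinkage": y, "oven": oven, "batch": batch})

fixed = smf.ols("shrinkage ~ oven + C(batch)", df).fit()
mixed = smf.mixedlm("shrinkage ~ oven", df, groups=df["batch"]).fit(reml=True)

print(fixed.mse_resid)  # residual variance, batch fixed
print(mixed.scale)      # residual variance, batch random -- nearly identical
```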

2.  You: "is it correct to say that unless I am curious about what uncontrolled variables are contributing to my overall variability (more common if I have many random variables and want a breakdown), it won't change the model's accuracy with respect to my fixed effects to include random effects or just leave them out?"

Maybe: As you say, the oven "estimate" will be about the same, but the precision (standard error) of that estimate can be very different...especially if batch-to-batch variability is really high. This plays into sample size; e.g., if batch-to-batch variability is the dominant source of variability, then the sample size that "matters" with respect to precision is the number of batches. If variability between batches is negligible, then the sample size that "matters" is the number of samples within each batch (or the total sample size). In the latter case, the precision of the estimate will be about the same as if you just left the random effect out.
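This precision difference is easiest to see when each batch goes through only one oven (batches nested within ovens) - then the batch variability really does limit how well you can estimate the oven effect. A sketch (Python/statsmodels, simulated data, so an illustration rather than your actual analysis):

```python
# Sketch (Python/statsmodels, simulated data): when each batch goes through
# only ONE oven, high batch-to-batch variability inflates the true
# uncertainty of the oven effect. Leaving batch out of the model gives an
# overly optimistic standard error; the mixed model reports the honest one.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_batches, reps = 8, 6
batch = np.repeat(np.arange(n_batches), reps)
oven = batch % 2                                       # batches nested in ovens
y = (10.0 + 1.5 * oven
     + rng.normal(0.0, 3.0, n_batches)[batch]          # dominant batch sd = 3
     + rng.normal(0.0, 1.0, n_batches * reps))         # residual sd = 1
df = pd.DataFrame({"shrinkage": y, "oven": oven, "batch": batch})

naive = smf.ols("shrinkage ~ oven", df).fit()          # batch left out entirely
mixed = smf.mixedlm("shrinkage ~ oven", df, groups=df["batch"]).fit(reml=True)

print(naive.bse["oven"])    # too small: treats all 48 parts as independent
print(mixed.bse_fe["oven"]) # larger: effectively only 8 batches of information
```

Both models give nearly the same oven estimate, but the standard errors differ sharply - which is exactly the "sample size that matters is the number of batches" point.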

 

As I said, there is a load to unpack here...and perhaps too much for such a forum. But now we have a start!

QW
Level III

Re: Mindset when interpreting mixed model results

Thanks!