Hi @shampton82,
If you look at the equation of R², it involves the ratio between the residual sum of squares (from a model) and the total sum of squares (related to the variance of the data):

R² = 1 − SS_res / SS_tot

(see Wikipedia: Coefficient of determination).
So R² is a metric indicating the proportion of variability explained by the model.
When part of the variance in the data is attributed to a random effect, the unexplained part of the variance shrinks: the residual sum of squares decreases, so R² increases, just as it would if a fixed effect were added.
You can also see this in the RMSE improvement between the two models, which goes from 5.91 to 0.93 when the random effects are included. If you change these effects from random to fixed, you'll get the same R² and RMSE values.
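As a small illustration of the mechanism described above (hypothetical simulated data and ordinary least squares with group indicators, not an actual mixed-model fit): once the model accounts for a grouping effect, the residual sum of squares drops, so R² rises and RMSE falls.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 3 groups (e.g. batches) with different means plus noise
n_per_group = 30
group = np.repeat([0, 1, 2], n_per_group)
y = np.array([10.0, 15.0, 20.0])[group] + rng.normal(0.0, 1.0, group.size)

def fit_metrics(X, y):
    """Least-squares fit; return R^2 and RMSE of the residuals."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((y - y.mean())**2)
    return 1.0 - ss_res / ss_tot, np.sqrt(np.mean(resid**2))

# Model 1: intercept only -- the group effect stays in the residuals
X0 = np.ones((y.size, 1))
r2_0, rmse_0 = fit_metrics(X0, y)

# Model 2: intercept + group indicators -- the effect is absorbed by the model
X1 = np.column_stack([np.ones(y.size), group == 1, group == 2]).astype(float)
r2_1, rmse_1 = fit_metrics(X1, y)

print(f"no group effect:   R2 = {r2_0:.3f}  RMSE = {rmse_0:.2f}")
print(f"with group effect: R2 = {r2_1:.3f}  RMSE = {rmse_1:.2f}")
```

Whether the group columns enter as fixed effects (as here) or the same variance is attributed to a random effect, the reduction in residual sum of squares is what drives both metrics.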
Whether an effect enters a model as random or fixed depends on how the data were collected/generated (for example via DoE), the role of the factors, and the assumptions about their influence on the response(s).
I will let other experts dive deeper into this topic.
Hope this discussion starter helps,
Victor GUILLER
L'Oréal Data & Analytics
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)