Hi @billi,
I understand that you want to quantify the "level of error" from assuming that data follow a normal distribution when they really come from a non-normal distribution. Is that correct? That is an interesting question.
Can I ask why you want to do that? Do you have a real situation where you need to do this that you can tell us about?
I am not sure how the data were generated in your example - maybe this is from a real process? But your Compare Distributions analysis suggests that the log-normal distribution might be the best distribution.
Having said that, the Gamma and Normal distributions have a similar AICc. AICc measures goodness of fit with a penalty for the number of parameters, and a common rule of thumb is that a difference in AICc of less than 2 gives no meaningful evidence to prefer one distribution over the other.
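If you wanted to see how that comparison works outside JMP, here is a minimal Python/scipy sketch (on simulated data, not your data) that fits a Normal and a Log-normal by maximum likelihood and computes AICc by hand:

```python
# Illustrative only: simulate a mildly skewed log-normal sample, fit
# Normal and Log-normal by maximum likelihood, and compare AICc.
# AICc = AIC + 2k(k+1)/(n - k - 1), with AIC = 2k - 2*loglik.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.lognormal(mean=3.0, sigma=0.2, size=200)  # small sigma => nearly symmetric

def aicc(loglik, k, n):
    aic = 2 * k - 2 * loglik
    return aic + 2 * k * (k + 1) / (n - k - 1)

n = len(x)

# Normal fit: 2 parameters (mean, sd)
mu, sd = stats.norm.fit(x)
ll_norm = np.sum(stats.norm.logpdf(x, mu, sd))

# Log-normal fit with location fixed at 0: 2 parameters (shape, scale)
shape, loc, scale = stats.lognorm.fit(x, floc=0)
ll_lnorm = np.sum(stats.lognorm.logpdf(x, shape, loc, scale))

print(f"AICc Normal:     {aicc(ll_norm, 2, n):.1f}")
print(f"AICc Log-normal: {aicc(ll_lnorm, 2, n):.1f}")
# A difference of less than about 2 suggests the two fits are
# effectively interchangeable for this sample.
```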
So that would suggest that, in this case, the "level of error" in location (central tendency) and scale (dispersion/variance) will be small from assuming a normal distribution when the distribution is really log-normal. You can see this from the fitted distributions on the plot - they are almost indistinguishable (especially as they are a similar colour!).
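If you did want to put a number on that "level of error", one rough approach (again just a sketch on simulated data, not your data) is to compare what the fitted Normal and Log-normal each imply for the mean, SD, and tail quantiles:

```python
# Illustrative only: fit both distributions to the same simulated data
# and compare the location, dispersion, and tail quantiles they imply.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.lognormal(mean=3.0, sigma=0.2, size=200)

mu, sd = stats.norm.fit(x)
shape, loc, scale = stats.lognorm.fit(x, floc=0)
norm_fit = stats.norm(mu, sd)
lnorm_fit = stats.lognorm(shape, loc, scale)

for label, q in [("5th pct", 0.05), ("median", 0.50), ("95th pct", 0.95)]:
    print(f"{label:>8}: Normal {norm_fit.ppf(q):7.2f} | Log-normal {lnorm_fit.ppf(q):7.2f}")
print(f"    mean: Normal {norm_fit.mean():7.2f} | Log-normal {lnorm_fit.mean():7.2f}")
print(f"      sd: Normal {norm_fit.std():7.2f} | Log-normal {lnorm_fit.std():7.2f}")
# With a small log-normal sigma the two columns land close together,
# which matches the nearly overlapping fitted curves on the plot.
```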
Does that help? Do you have a particular way that you might want to quantify the "level of error"? And how would you know what the "correct" distribution is, in order to then look at the error from assuming an "incorrect" normal distribution?
Phil