Hello,
I was curious about something. I was looking at goodness-of-fit under Distributions using Fitted Normal Distribution -> Goodness-of-Fit Test. From what I've read about interpreting the results, a small p-value means you reject the null hypothesis and conclude that the data are not normally distributed. My first look at the column made it obvious that it is not normally distributed (which is what I expected).
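Just so it's clear what I mean, here's roughly the equivalent of what I'm doing, sketched in Python with scipy (I'm actually working in the GUI, and the exponential sample below is just placeholder data standing in for my column):

```python
import numpy as np
from scipy import stats

# Placeholder data standing in for my column; in reality it comes from my data table.
rng = np.random.default_rng(0)
my_column = rng.exponential(size=200)  # deliberately non-normal example data

# Shapiro-Wilk: a small p-value means reject the null that the data are normal.
sw = stats.shapiro(my_column)
print(f"Shapiro-Wilk: W = {sw.statistic:.4f}, p = {sw.pvalue:.4f}")

# Anderson-Darling: scipy reports critical values instead of a p-value,
# so compare the statistic against the critical value for your alpha.
ad = stats.anderson(my_column, dist="norm")
print(f"Anderson-Darling: A^2 = {ad.statistic:.4f}")
for cv, sig in zip(ad.critical_values, ad.significance_level):
    verdict = "reject" if ad.statistic > cv else "fail to reject"
    print(f"  {sig}% level: {verdict} normality")
```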

So then I tried transforming the column with Johnson Normalize and reran the test, but the transformed data still did not look normally distributed.
So then I tried the Normal Quantile method and ran the test again. At first glance I thought it had worked, but then I noticed that the Shapiro-Wilk p-value = 0.0027 (which would warrant rejecting the null), while the Anderson-Darling p-value = 0.1672, which does not.
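For reference, my understanding of the Normal Quantile transform is a rank-based inverse normal transform, roughly like the sketch below (the `normal_quantile_transform` helper and the Blom-style plotting positions are my own assumptions about what the software does, and the lognormal sample is just a placeholder for my skewed column):

```python
import numpy as np
from scipy import stats

def normal_quantile_transform(x):
    """Rank-based inverse normal transform (Blom-style plotting positions).
    This is my guess at what the Normal Quantile option does; the software
    may use a slightly different plotting-position formula."""
    x = np.asarray(x, dtype=float)
    ranks = stats.rankdata(x)              # average ranks for ties
    p = (ranks - 0.375) / (len(x) + 0.25)  # Blom plotting positions
    return stats.norm.ppf(p)               # map to standard normal quantiles

rng = np.random.default_rng(1)
skewed = rng.lognormal(size=150)           # placeholder for my skewed column
z = normal_quantile_transform(skewed)

print("Shapiro-Wilk p =", stats.shapiro(z).pvalue)
ad = stats.anderson(z, dist="norm")
print("Anderson-Darling A^2 =", ad.statistic,
      "(5% critical value:", ad.critical_values[2], ")")
```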
Question 1: What do you do when they conflict like that? Do they both need to provide the same conclusion?
Question 2: Is it bad practice to try different normalization methods like that? If so, do you have any suggestions for documentation or videos that would help me choose the best one?