Hi @Mickyboy : I understand...and apologies if I'm not making myself clear and/or coming across as somehow argumentative; my only intention here is to offer some guidance where it may prove helpful. That said, I'll respond to your comment above: "...I thought it's one thing to transform your variable to something that approaches a normal distribution, and it's very easy to do in JMP as highlighted above, but how do you back transform."

Yes, there are good reasons to transform your variable to something that is approximately normal. But applying either of these distribution functions will not, *by definition*, do that.

In your case, you applied either the PDF or the CDF function (see my post above) to your variable (let's call it X).

1. If you used PDF: All this does is evaluate the t-distribution density curve (the blue line in the pic below). __So, for each X (x-axis), applying this function gives you the corresponding point (y-axis) on the blue curve.__

2. If you used CDF: All this does is calculate the area under the PDF curve, which is a probability. In the example below, the area under the blue line up to X = 65.104 is 0.439265. So, Prob(X < 65.104) = 0.439265 if X has a t-distribution (given your values of Location, Scale, and DF). __So, for each X, applying this function gives you a probability.__
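To make the two points above concrete, here is a small sketch using scipy's t-distribution. The Location, Scale, and DF values are made-up placeholders, not the ones from your analysis, so the printed numbers won't match the 0.439265 in the pic:

```python
# Sketch of what the PDF and CDF functions actually compute.
# df, loc, scale below are hypothetical illustration values.
from scipy.stats import t

df, loc, scale = 5, 66.0, 3.0   # placeholder parameters, not yours
x = 65.104

density = t.pdf(x, df, loc=loc, scale=scale)  # height of the curve at x (point 1)
prob = t.cdf(x, df, loc=loc, scale=scale)     # area under the curve up to x (point 2)

print(density)  # a point on the density curve, not a "transformed" X
print(prob)     # Prob(X < x) under the assumed t-distribution
```

Note that neither output is a normally distributed version of X; one is a curve height, the other a probability.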

So, whichever function you choose, PDF or CDF, it can't result in a normal distribution (or a t-distribution); all it does is assume your data come from a t-distribution and then draw a curve and/or calculate probabilities. In fact, applying a continuous variable's own CDF to that variable always yields a Uniform(0, 1) distribution (the special case a = b = 1 of the Beta distribution), no matter what the distribution of X is; but that is another topic for perhaps another day.
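If you're curious, the "CDF of X is Uniform" fact (the probability integral transform) is easy to see numerically. A sketch, again with arbitrary placeholder parameters:

```python
# Probability integral transform: F(X) ~ Uniform(0, 1) when F is the
# CDF of X's own (continuous) distribution. Parameters are illustrative.
import numpy as np
from scipy.stats import t, kstest

rng = np.random.default_rng(0)
df, loc, scale = 5, 66.0, 3.0          # placeholder parameters

x = t.rvs(df, loc=loc, scale=scale, size=10_000, random_state=rng)
u = t.cdf(x, df, loc=loc, scale=scale)  # apply X's own CDF to X

# A Kolmogorov-Smirnov test against Uniform(0, 1) should not reject.
stat, pvalue = kstest(u, "uniform")
print(stat, pvalue)
```

The KS statistic comes out tiny, consistent with u being uniform on (0, 1).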

https://en.wikipedia.org/wiki/Inverse_transform_sampling
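The link above also answers the "how do you back transform" part: the inverse of the CDF is the quantile function, so pushing Uniform(0, 1) values through it recovers values on the original scale. A sketch with made-up parameters:

```python
# Inverse transform sampling: the quantile function (inverse CDF) is the
# back-transform of the CDF. Parameters are hypothetical illustrations.
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(1)
df, loc, scale = 5, 66.0, 3.0           # placeholder parameters

u = rng.uniform(size=10_000)            # Uniform(0, 1) draws
x = t.ppf(u, df, loc=loc, scale=scale)  # inverse CDF: back to the t scale

# Round trip: applying the CDF again returns the original uniforms.
u_back = t.cdf(x, df, loc=loc, scale=scale)
print(np.allclose(u, u_back))
```

So CDF and quantile are inverses of each other, which is exactly the back-transform relationship.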

https://math.stackexchange.com/questions/1564584/prove-uniform-distribution