
No of DF in p-value calculation from t Ratio in Parameter Estimates table (Stand...



Aug 14, 2019 3:02 AM
(1058 views)

Hello,

This may be a trivial question, but I have not found an answer anywhere else.

I am running a linear regression model with standard least squares (a DOE made with a DSD). In the Parameter Estimates table (see the attached picture), t ratios and p-values (Prob > |t|) are shown for each effect included in the model. I wonder what the critical t is in this test, or, to put it another way, what the **number of degrees of freedom used** is for the two-sided t-test at the 5 % significance level.

By trial and error I figured out that it is the number of degrees of freedom used for Error. It also seems odd to me that the probabilities from the F ratio are the same as those from the t ratio. Could you please explain the logic behind this? Why are not all degrees of freedom used (16 in the example instead of 11)?

Thanks, Danijel

1 ACCEPTED SOLUTION


Created:
Aug 14, 2019 4:24 AM
| Last Modified: Aug 14, 2019 4:25 AM
(1049 views)
| Posted in reply to message from Danijel_V 08-14-2019

You should look at the Statistical Details chapter in **Help** > **Books** > **Fitting Linear Models**.

The *t* ratios use the error DF. You can use the JSL function **q = t Quantile( probability, df )** to obtain the 'critical' quantile. For the two-sided case at significance level *α*, use 1 − *α*/2 as the probability (0.975 for *α* = 0.05). See **Help** > **Scripting Index** > **Functions** > **Probability** > **t Quantile**.
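The same calculation can be sketched outside JMP. This is a minimal illustration (not JSL), computing the two-sided 5 % critical value by numerical integration and bisection; the error DF of 11 is taken from the example in the question:

```python
import math

def t_pdf(x, df):
    # Student's t probability density
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1.0 + x * x / df) ** (-(df + 1) / 2)

def t_cdf(x, df, steps=4000):
    # CDF by trapezoid integration from 0 to x, plus the lower half (0.5)
    if x < 0:
        return 1.0 - t_cdf(-x, df, steps)
    h = x / steps
    area = 0.5 * (t_pdf(0.0, df) + t_pdf(x, df))
    for i in range(1, steps):
        area += t_pdf(i * h, df)
    return 0.5 + area * h

def t_quantile(p, df):
    # Bisection; mirrors the JSL call  q = t Quantile( p, df )  for p > 0.5
    lo, hi = 0.0, 50.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if t_cdf(mid, df) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha, error_df = 0.05, 11   # error DF from the question's example
t_crit = t_quantile(1 - alpha / 2, error_df)
print(round(t_crit, 3))      # two-sided 5% critical value, about 2.201
```

Any t ratio in the Parameter Estimates table larger than this value in magnitude gives Prob > |t| below 0.05.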

In the case of a balanced design (your case), the *F* ratios and *t* ratios test the same hypothesis. So the *F* ratio is the square of the *t* ratio. They must yield the same *p* value in this special case.
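That identity can be checked numerically. The sketch below (again not JSL; the error DF of 11 comes from the question, and the t ratio of 2.201 is just an illustrative value) integrates both densities directly and compares the two-sided t p-value with the upper-tail p-value of F(1, df) at F = t²:

```python
import math

ERROR_DF = 11    # error DF from the question's example
T_OBS = 2.201    # an illustrative t ratio

def t_pdf(x, df):
    # Student's t probability density
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1.0 + x * x / df) ** (-(df + 1) / 2)

def f_pdf(x, d1, d2):
    # F-distribution density, written with the beta function B(d1/2, d2/2)
    b = math.gamma(d1 / 2) * math.gamma(d2 / 2) / math.gamma((d1 + d2) / 2)
    return math.sqrt((d1 * x) ** d1 * d2 ** d2 / (d1 * x + d2) ** (d1 + d2)) / (x * b)

def trapz(f, a, b, n=50000):
    # Composite trapezoid rule on [a, b]
    h = (b - a) / n
    area = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return area * h

# Two-sided t p-value: twice the upper tail (truncated at 60; remainder is negligible)
p_t = 2.0 * trapz(lambda x: t_pdf(x, ERROR_DF), T_OBS, 60.0)

# Upper-tail p-value of F(1, df) at F = t^2 (truncated at 60^2)
p_f = trapz(lambda x: f_pdf(x, 1, ERROR_DF), T_OBS ** 2, 3600.0)

print(round(p_t, 4), round(p_f, 4))   # both about 0.05
```

The agreement is no accident: for one numerator DF, the event F > t² is exactly the event |t| > |t_obs|, so the two tails are the same probability.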

The confusion is about 'degrees of freedom.' The sum of squares contains more terms than there is independent information, so the DF correct for that in the mean squares. If you estimated 15 parameters, for example, you would have only 1 DF of independent information left to assess the estimates. You cannot '*eat your cake and have it, too*.'
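To make that concrete with the numbers from the question: the run and parameter counts below are inferred (an assumption) from the 16 total DF and 11 error DF mentioned above; every parameter estimated by the model consumes one DF, and what remains is the error DF used by the t tests:

```python
n_runs = 17     # assumed: corrected total DF = n - 1 = 16
n_params = 6    # assumed: intercept + 5 model terms

error_df = n_runs - n_params   # DF left over after estimating the parameters
print(error_df)                # 11, matching the Error DF in the example
```

This is why the t tests use 11 DF rather than 16: five of the sixteen total DF were already spent estimating the model terms.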

Learn it once, use it forever!

