Practical Explanation of Parameter Estimates


Jun 28, 2017 1:01 PM
(10369 views)

I am working with the Fit Model platform with a continuous output variable and multiple nominal input variables.

I understand that within the Parameter Estimates table, the *t* value tests whether or not the estimate is equal to zero. In a practical sense, if I have a variable (Cell Lot) with 3 categories (1, 2, and 3) and the two listed (1 & 2) in the Parameter Estimates table have *p*-values less than the alpha of 0.05, what conclusion can be made? Are 1 and 2 significantly different from 3?

I already know from the Effect Tests that this variable (Cell Lot) is a significant contributor to the output.

Please forgive the naive question.

2 ACCEPTED SOLUTIONS


Created:
Jun 28, 2017 1:32 PM
| Last Modified: Jun 28, 2017 1:43 PM
(14006 views)
| Posted in reply to message from iPSC 06-28-2017

You are correct. The **Effect Test** is based on the *type III sum of squares* associated with adding a term to the model. It is the *F* test. You have one *F* test for each term. This test is useful for model reduction and inference about factor effects.

On the other hand, the **Parameter Estimates** reports the *t* test because it compares the estimate to the value of the null hypothesis, which is that the parameter is zero. (You can test against other null hypotheses with a *t* test but JMP does not provide such a test. There is a script for this purpose.) You have one *t* test for each estimate.

If you want to see the results for the last level, click the red triangle at the top next to Fit Least Squares and select **Estimates** > **Expanded Estimates**. (JMP does not report the last level by default because the estimate of the last parameter must be equal to the negative of the sum of the other parameter estimates. You can enable Expanded Estimates in the platform preferences if you like.)
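That sum-to-zero constraint can be illustrated outside JMP. Below is a minimal NumPy sketch (hypothetical data, not JSL) showing that with effect coding of a three-level nominal factor, where the last level gets -1 in every effect column, the last level's estimate is exactly the negative sum of the reported ones:

```python
# Sketch with made-up data: effect coding of a 3-level nominal factor.
# The last level is coded -1 in every effect column, so its implied
# estimate is the negative sum of the reported estimates.
import numpy as np

rng = np.random.default_rng(1)
# 5 hypothetical observations per lot, with lot means near 10, 12, and 15
y = np.concatenate([10 + rng.normal(0, 1, 5),
                    12 + rng.normal(0, 1, 5),
                    15 + rng.normal(0, 1, 5)])

X = np.column_stack([
    np.ones(15),                                   # intercept
    np.r_[np.ones(5), np.zeros(5), -np.ones(5)],   # Lot[1]
    np.r_[np.zeros(5), np.ones(5), -np.ones(5)],   # Lot[2]
])

b, *_ = np.linalg.lstsq(X, y, rcond=None)
lot3 = -(b[1] + b[2])   # the unreported Lot[3] estimate
print("Lot[1]:", b[1], "Lot[2]:", b[2], "Lot[3] (implied):", lot3)
```

With balanced data, the intercept is the grand mean of the level means and each effect is that level's deviation from it, which is why the three effects must sum to zero.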

The interpretation of these tests is limited to an independent test versus 0. Your example concludes that the estimates for level 1 and level 2 are different from zero. That is all that you can say. These tests do not compare these levels to the last level. You could use an additional *contrast* for this purpose.

You also have to be concerned about the multiple comparisons issue of inflated type I error rate with all these tests.

Learn it once, use it forever!


Yes, Tukey's method of multiple comparisons would be the best way to compare all of the lots to each other. Do not use the Student's t method: it does not adjust for the number of comparisons, so your type I error rate over all the comparisons will be much higher than the stated alpha.

Yes, you can use the *p*-values as usual with the appropriate adjustment.
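To make that inflation concrete, here is a small arithmetic sketch (plain Python, under the simplifying assumption of independent comparisons) of the familywise error rate when each unadjusted test is run at alpha = 0.05:

```python
# With m independent comparisons each run at level alpha, the chance of
# at least one false positive is 1 - (1 - alpha)**m.
alpha = 0.05
for m in (1, 3, 10):
    fwer = 1 - (1 - alpha) ** m
    print(f"{m} comparison(s): familywise error rate = {fwer:.3f}")
```

For the 3 pairwise lot comparisons in this thread, the unadjusted familywise rate is already about 0.14, nearly three times the nominal 0.05; Tukey's adjustment is designed to hold the rate over all pairwise comparisons at alpha.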

Learn it once, use it forever!

7 REPLIES



Re: Practical Explanation of Parameter Estimates

Thank you very much for your detailed answer, Mark.

As a follow-up: if I wanted to explore whether or not Cell Lots 1, 2, and 3 are significantly different from one another using this model, would choosing Multiple Comparisons and then Tukey or Student's t (depending on which is appropriate) be the right method? When I do this, I see an All Pairwise Differences table with 3 comparisons, one for each possible pair of categories (1 & 2, 1 & 3, 2 & 3). In this case, would p-values less than alpha indicate a significant difference between those two groups?



Re: Practical Explanation of Parameter Estimates

Thank you very much!


Re: Practical Explanation of Parameter Estimates

Sorry for bumping an old thread, but this is the only explanation I've seen anywhere for why JMP does this by default.

Leaving out the Nth parameter because it's possible to calculate the estimate from the preceding ones seems short-sighted. I'm working on a Tobit regression using parametric survival, and the output window doesn't allow for "Expanded Estimates". As you say, I can calculate the estimate, fine, but I cannot estimate the SE, t Ratio, or Prob > |t| this way. Those metrics have value, or else they wouldn't be reported at all, and yet I have no way to get JMP to calculate them for me in this situation.


Re: Practical Explanation of Parameter Estimates

Created:
Feb 11, 2020 11:28 AM
| Last Modified: Feb 11, 2020 11:30 AM
(1930 views)
| Posted in reply to message from crmarvin42 02-11-2020

Please ignore my previous reply; it was incorrect. I was thinking of another platform that is launched from the Fit Model dialog.

Learn it once, use it forever!


Re: Practical Explanation of Parameter Estimates

Created:
Feb 11, 2020 11:45 AM
| Last Modified: Feb 11, 2020 11:46 AM
(1922 views)
| Posted in reply to message from crmarvin42 02-11-2020

Most platforms will offer the expanded estimates, but apparently not in your situation. Maybe a trick will get you what you need.

Suppose your categorical variable has the levels A, B, and C. When you fit your model you will not see the estimate and testing for level C, but you will have tests for A and B. Now, put a Value Ordering property on your categorical variable. Change it to anything you wish as long as C is NOT the last item in the list. For example, I will propose C, A, and B. Now refit your model. You should see testing for C and A, but not B.
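The effect of that reordering can be sketched outside JMP. The NumPy example below (hypothetical data and a hypothetical `effect_fit` helper, not JSL) fits the same effect-coded model under two level orders and shows that the estimate hidden in the first fit is reported directly by the second:

```python
# Sketch: refitting an effect-coded model with a different level order
# recovers the estimate for the level that was previously last (hidden).
import numpy as np

def effect_fit(y, labels, order):
    """OLS with effect coding; the last level in `order` is coded -1.
    Returns the estimates for every level except the last."""
    last = order[-1]
    cols = [np.ones(len(y))]                       # intercept
    for lev in order[:-1]:
        cols.append(np.where(labels == lev, 1.0,
                    np.where(labels == last, -1.0, 0.0)))
    b, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    return dict(zip(order[:-1], b[1:]))

labels = np.array(["A"] * 4 + ["B"] * 4 + ["C"] * 4)
y = np.r_[np.full(4, 10.0), np.full(4, 12.0), np.full(4, 15.0)]

fit1 = effect_fit(y, labels, ["A", "B", "C"])  # C is last: hidden
fit2 = effect_fit(y, labels, ["C", "A", "B"])  # B is last: C now reported
print(fit1)   # estimates for A and B only
print(fit2)   # estimates for C and A only
```

The C estimate in the second fit equals the negative sum of the A and B estimates from the first, confirming that the two orderings describe the same model; in JMP, the refit would also supply the SE, t Ratio, and Prob > |t| for the formerly hidden level.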

Hopefully this workaround will actually work in your situation. Either way, you should look in the JMP wish list for this suggested improvement and vote for it. If the suggestion is not there, please add it!

Dan Obermiller
