
Comparing intercepts

I am trying to compare the intercepts of multiple lines. I know I can use Fit Model to examine whether the slopes are similar, but how do I test whether the intercepts are the same? Thanks.
1 ACCEPTED SOLUTION

Accepted Solutions

Re: Comparing intercepts

Solution as provided by Jonathan:

 

It is very easy to test the intercepts as well. You can do this in two ways:

METHOD 1:
For example, say I have a simple regression, meaning one Y and one X. Furthermore, say I have a categorical (nominal) variable called Group. This will be used to produce separate lines for each Group.

In Fit Model, put the following as the model:
X
Group
X*Group

In the output, look at the Effect Tests node. The test for Group is testing whether the intercepts are the same for the groups. The test for X*Group is testing whether the slopes are the same for the groups.
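For anyone who wants to check the same test outside JMP, here is a minimal sketch using Python's statsmodels (my own illustration; the file name and the columns Y, X, and Group are placeholders). Sum-to-zero contrasts are used so the coding matches JMP's effect coding for nominal variables:

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("mydata.csv")  # placeholder file with columns Y, X, Group

# Y ~ X + Group + X*Group, with sum-to-zero (effect) coding as in JMP
fit = smf.ols("Y ~ X * C(Group, Sum)", data=df).fit()

# Type III tests play the role of the Effect Tests report:
#   the C(Group, Sum) row asks whether the intercepts are equal,
#   the X:C(Group, Sum) row asks whether the slopes are equal.
print(anova_lm(fit, typ=3))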


METHOD 2:
You can fit the same model by using dummy variables (1's and 0's). If the categorical variable has 3 levels, then you need 2 columns of dummy variables. The first column (call it Dummy2) has a value of 1 where the grouping variable is equal to level 2, and 0 everywhere else. The second column (call it Dummy3) has a value of 1 where the grouping variable is equal to level 3, and 0 everywhere else.

In Fit Model, put the following as the model:
X
Dummy2
Dummy3
X*Dummy2
X*Dummy3

Since we used dummy variables for levels 2 and 3 of the grouping variable, the Dummy2 and Dummy3 tests are testing whether levels 2 and 3 have different intercepts from level 1. The X*Dummy2 and X*Dummy3 tests are testing the slopes.

The estimated parameters of this model will be different from the parameters of the first method, but the predicted values will be the same.
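The same dummy-variable model as a statsmodels sketch (placeholder names again; this assumes the grouping column holds the levels 1, 2, and 3):

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mydata.csv")  # placeholder file with columns Y, X, Group
df["Dummy2"] = (df["Group"] == 2).astype(int)  # 1 where Group is level 2, else 0
df["Dummy3"] = (df["Group"] == 3).astype(int)  # 1 where Group is level 3, else 0

fit = smf.ols("Y ~ X + Dummy2 + Dummy3 + X:Dummy2 + X:Dummy3", data=df).fit()

# Dummy2 and Dummy3 rows: does that level share level 1's intercept?
# X:Dummy2 and X:Dummy3 rows: does that level share level 1's slope?
print(fit.summary())

Comparing fit.fittedvalues between the two sketches confirms the last point: the coefficients differ, but the fitted values match.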

 


The Effect Tests report node is totally different from an LS Means report. The LS Means are not output in the Effect Tests report.

If you want to examine the estimated slopes and intercepts, I would do the following. On the Fit Model dialog, on the red triangle popup menu, uncheck the option Center Polynomials. That way when you fit a model, all the estimates are directly interpretable. Then fit the model and use the Save Columns >> Prediction Formula command. A new column gets created with the model saved as a prediction formula. You can examine and compare the parameters at that point.
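In the statsmodels sketches nothing is centered unless you add it, so the analogue of reading the saved prediction formula is simply reading the coefficients. Here is a sketch (same placeholder names) that reconstructs each group's intercept and slope from the dummy-variable fit:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mydata.csv")
df["Dummy2"] = (df["Group"] == 2).astype(int)
df["Dummy3"] = (df["Group"] == 3).astype(int)
fit = smf.ols("Y ~ X + Dummy2 + Dummy3 + X:Dummy2 + X:Dummy3", data=df).fit()

# Level 1 is the baseline; the dummy terms are offsets from it.
b = fit.params
print("level 1 line:", b["Intercept"], b["X"])
print("level 2 line:", b["Intercept"] + b["Dummy2"], b["X"] + b["X:Dummy2"])
print("level 3 line:", b["Intercept"] + b["Dummy3"], b["X"] + b["X:Dummy3"])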

You can also just use Fit Y by X and fit y vs x, and include the categorical variable as a By variable. Then fit each of the lines and look at the intercepts.

The dummy variable approach can be used to get at the slopes and intercepts. Again, uncheck Center Polynomials for direct interpretation.

You can also have JMP predict Y for any value of x. Before running the model, include rows in the data table for those X's, but leave the Y cells blank for those rows. Then when you have JMP save the fitted model, it saves predicted values for those rows.
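In the statsmodels sketches the blank-row trick isn't needed; predict accepts a small table of the x values you care about (placeholder names, and the Group values must be levels that appear in the data):

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mydata.csv")
fit = smf.ols("Y ~ X * C(Group)", data=df).fit()

# Predicted Y at chosen (X, Group) combinations
new = pd.DataFrame({"X": [0.0, 5.0, 5.0], "Group": [1, 1, 2]})
print(fit.predict(new))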

For the simple example I outlined before, these methods work great.

 


9 REPLIES

Re: Comparing intercepts

It is very easy to test the intercepts as well. You can do this in two ways:

METHOD 1:
For example, say I have a simple regression, meaning one Y and one X. Furthermore, say I have a categorical (nominal) variable called Group. This will be used to produce separate lines for each Group.

In Fit Model, put the following as the model:
X
Group
X*Group

In the output, look at the Effect Tests node. The test for Group is testing whether the intercepts are the same for the groups. The test for X*Group is testing whether the slopes are the same for the groups.


METHOD 2:
You can fit the same model by using dummy variables (1's and 0's). If the categorical variable has 3 levels, then you need 2 columns of dummy variables. The first column (call it Dummy2) has a value of 1 where the grouping variable is equal to level 2, and 0 everywhere else. The second column (call it Dummy3) has a value of 1 where the grouping variable is equal to level 3, and 0 everywhere else.

In Fit Model, put the following as the model:
X
Dummy2
Dummy3
X*Dummy2
X*Dummy3

Since we used dummy variables for levels 2 and 3 of the grouping variable, the Dummy2 and Dummy3 tests are testing whether levels 2 and 3 have different intercepts from level 1. The X*Dummy2 and X*Dummy3 tests are testing the slopes.

The estimated parameters of this model will be different from the parameters of the first method, but the predicted values will be the same.

Re: Comparing intercepts

Jonathan,
Thanks for your reply. As I look at the Effect Tests, the LS Means are not the intercepts; they are the "means" adjusted for x. So in my case I think the lines may have the same intercept, but they certainly have different slopes. By the time they get to the "average" x values, the lines have diverged quite a bit, so the LS Means are certainly different. But what I really want to know is whether the intercepts are different, or whether the y values differ at some other small value of x.

I haven't tried the dummy variable approach yet, but do you think this will answer my question or is it just another way to get the LS means?

Thanks again. Dan

Re: Comparing intercepts

The Effect Tests report node is totally different from an LS Means report. The LS Means are not output in the Effect Tests report.

If you want to examine the estimated slopes and intercepts, I would do the following. On the Fit Model dialog, on the red triangle popup menu, uncheck the option Center Polynomials. That way when you fit a model, all the estimates are directly interpretable. Then fit the model and use the Save Columns >> Prediction Formula command. A new column gets created with the model saved as a prediction formula. You can examine and compare the parameters at that point.

You can also just use Fit Y by X and fit y vs x, and include the categorical variable as a By variable. Then fit each of the lines and look at the intercepts.

The dummy variable approach can be used to get at the slopes and intercepts. Again, uncheck Center Polynomials for direct interpretation.

You can also have JMP predict Y for any value of x. Before running the model, include rows in the data table for those X's, but leave the Y cells blank for those rows. Then when you have JMP save the fitted model, it saves predicted values for those rows.

For the simple example I outlined before, these methods work great.

Re: Comparing intercepts

Jonathan,

This was very helpful. Sorry for the mixup between the Effect Test and Effect Details report node.

So I unchecked Center Polynomials as you suggested. That changed the sums of squares and F ratios in the Effect Tests table: the categorical variable went from being significant to not significant. I presume this means that the intercepts are not significantly different. Looking at the prediction formula, they don't look different, so this seems to make sense. Am I correct in my interpretation?

If so, it appears that, in my case, at low values of x there is no significant difference among treatments in y. I used your suggestion to get some predicted y-values for various x values. Any idea how I would test whether the y-values are different at a particular x-value (not zero and not at the mean x-value)? I have more than two categories (actually 8!).

Thanks again. This is really moving me forward in my interpretation. Dan

Re: Comparing intercepts

Dan,

I was surprised at first to hear that the tests are different for centered vs. uncentered. But after talking to some others about it, it is true! The explanation is not simple, but the more accurate test is the one from the centered method. In your case, this means the intercepts are different. That said, statistical significance doesn't mean practical significance: maybe the difference, or the impact of that difference, is small enough to ignore.

About the differing y values: the predicted y values are the same wherever the lines cross. So, if the fitted line for group 1 crosses the fitted line for group 2, then the predicted y values are the same at that point, and there's not really any testing to do there. You can get a sense of the prediction variation by using confidence intervals. Use Save Columns >> Mean Confidence Interval and Indiv Confidence Interval. The former gives a confidence interval for the average value at a given x. The latter gives a confidence interval for an individual realization of the response, and is wider than the former.
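In statsmodels terms, those two intervals both come out of get_prediction (a sketch with placeholder names):

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mydata.csv")
fit = smf.ols("Y ~ X * C(Group)", data=df).fit()

new = pd.DataFrame({"X": [5.0, 5.0], "Group": [1, 2]})
frame = fit.get_prediction(new).summary_frame(alpha=0.05)

# mean_ci_* plays the role of Mean Confidence Interval (average response)
# obs_ci_* plays the role of Indiv Confidence Interval (one new observation)
print(frame[["mean", "mean_ci_lower", "mean_ci_upper",
             "obs_ci_lower", "obs_ci_upper"]])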

As always, exercise caution when extrapolating (predicting for an x value outside the range of the data). I try to stay away from it.

Re: Comparing intercepts

Jonathan,

Thanks for the work you're doing on this. It's interesting: in the case where the centered method gave a significant difference and the uncentered method didn't, I would have said, looking at the intercepts, that they weren't different. Yet when I tried this on another data set where I didn't think the intercepts were different, both the centered and uncentered methods gave significant differences. So I wonder again whether the uncentered method is testing the intercepts and the centered method is testing the covariate-adjusted means. I get your point about statistically significant versus "really" significant.

I'm wondering about the following situation, which is what I think I have (mine is somewhat more complicated, but I'll give a simpler example). Let's say I have two categories, A and B, and y-values increase with a covariate x. If the lines for A and B have the same y-intercept, but the slope for A is 1 and the slope for B is 2, an ANCOVA using Fit Model may show that the covariate is significant, the treatments are different, and the slopes are different. How would I know the intercepts are the same?

I'm a biologist studying the change in shape of mussels from different locations and how their shapes change as they grow. So for me the treatments are different locations, the covariate (x) is their length, and the y-value is a shape variable (width/length, appropriately transformed). It appears that the populations have young mussels starting out with the same shape (at small size they are the same width). As they grow in length they get relatively wider. However, the rate at which they get relatively wider differs among the locations. If I can show the intercepts are the same, it would suggest that all of the populations start out pretty much the same and that there is an environmental component to the change in shape rather than a genetic one. So I'm trying to test whether there are differences in shape among locations, adjusted for length. I hope this helps to frame the question better.

With regard to my second question: I do know how to get predictions for various x-values and the errors for these estimates. I could do a series of pairwise comparisons and adjust for the number of comparisons to test for significance. I was just wondering if there is an easier way to do this in JMP (like the SNK tests in the compare means procedure).

Thanks again for all of your help with this.

Re: Comparing intercepts

You asked how you would know the intercepts are the same. To simulate the situation you presented (y vs x with 2 groups), I created several fake data sets for which I know the intercepts are the same. Using the centered approach, the conclusion is confusing. Using the uncentered approach, the test agrees with the underlying structure of the data (equal intercepts) and leads to a conclusion of equal intercepts. So, I recommend using the uncentered approach for testing the intercepts.
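Here is a small sketch of that fake-data check (my own illustration in Python/statsmodels, not the script used above): two groups share an intercept of 3 but have slopes 1 and 2, and the uncentered dummy-variable test for the intercept offset should usually fail to reject:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 50
x = rng.uniform(1, 10, 2 * n)
group = np.repeat(["A", "B"], n)
slope = np.where(group == "A", 1.0, 2.0)
y = 3.0 + slope * x + rng.normal(0, 1, 2 * n)  # common intercept of 3

df = pd.DataFrame({"Y": y, "X": x})
df["D2"] = (group == "B").astype(int)  # dummy for group B

# Uncentered fit: the D2 coefficient estimates intercept_B - intercept_A
fit = smf.ols("Y ~ X + D2 + X:D2", data=df).fit()
print(fit.pvalues["D2"])  # usually well above 0.05, i.e. equal intercepts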

I know it's confusing to understand what's going on with the test: is it testing the intercepts or the means? If the grouping variable is set to nominal in JMP, then you are actually testing the means. If you use the dummy variable approach with all variables set as continuous in JMP, the dummy variable main effects are essentially testing equality of intercepts between groups. But the p-value is the same either way; it's essentially the same test.

One more thing. The Effect Tests report is an overall test. If Group is nominal, it is testing the hypothesis mu1 = mu2 = ... = muk. If there is a difference anywhere, the test could be significant, but it doesn't tell you where the difference is; that is a secondary analysis, something like a pairwise comparison. At the top of the Fit Model report you will find leverage plots, and the red triangle there has Tukey options for multiple comparisons.
If you use the dummy variable approach with k levels, you have k-1 tests, each testing whether that group's intercept is equal to the intercept of group 1. You are not testing any of the other pairs, like 2 vs 3, etc. You could try the Estimates >> Custom Test command. This will let you compare groups, but it doesn't adjust for multiple comparisons, and there's no way to do that in JMP for continuous variables.
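For what it's worth, the same kind of custom contrast can be written in the statsmodels sketch (placeholder names), here testing whether the group 2 and group 3 intercepts are equal:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mydata.csv")
df["Dummy2"] = (df["Group"] == 2).astype(int)
df["Dummy3"] = (df["Group"] == 3).astype(int)
fit = smf.ols("Y ~ X + Dummy2 + Dummy3 + X:Dummy2 + X:Dummy3", data=df).fit()

# H0: group 2 intercept == group 3 intercept (equal offsets from group 1)
print(fit.t_test("Dummy2 - Dummy3 = 0"))

This is a single unadjusted comparison; with many pairs you would apply your own correction, for example a Bonferroni adjustment that multiplies each p-value by the number of comparisons.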

Re: Comparing intercepts

Jonathan,

This really helps. I will use the uncentered approach to test the intercepts.

I was aware of the Effect Tests and have used the Tukey options available for the overall test. I appreciate the tip on using the dummy variable approach for multiple comparisons of intercepts. I suppose I could just use the intercepts and their error estimates from separate linear regressions (one for each group) and do pairwise comparisons with a Bonferroni correction for multiple comparisons.

I really appreciate all of your help.

Dan
