
JMP User Community : Discussions : Test for homo-/heteroscedasticity in JMP


Nov 12, 2013 1:24 PM

Hi, I'm trying to check for homo- or heteroscedasticity in a least squares regression model. I can subjectively examine the residuals, but I'd prefer to additionally report a p-value, such as one generated by the White test or the Breusch–Pagan test. I cannot find instructions for this in the JMP help or documentation, or on the web. Should I save the residuals, square them, and then run a new regression with the same predictors as the original, but with the squared residuals as the outcome? Would the Breusch–Pagan p-value be the significance of the omnibus F in that model? This is the closest I've found: http://www.augustana.edu/users/ecmarme/using%20JMP%20to%20check%20model%20for%20heteroskedasticity.p...

Solved! Go to Solution.


8 REPLIES

Nov 14, 2013 8:32 AM

Hi Kayle,

Take a look at this link:

Regression Diagnostics using JMP - Equality of Error Variance - YouTube

Perhaps it can give you a starting point.

Ron

Nov 14, 2013 10:16 AM

Hi Ron,

Thanks for the link; the video is helpful. Julian Parris suggests splitting at the median of the predicted values and testing whether the residuals in the upper half have different variability than those in the lower half (using Levene, Brown–Forsythe, Bartlett, or another categorical equality-of-variance test). However, collapsing continuous data into categories is not ideal. Is it possible to calculate the White test or the Breusch–Pagan test in JMP? Would squaring the residuals and regressing them on the predicted values yield the p-value for one of these tests?

Nov 14, 2013 11:01 AM

Solution

Hi Kayle,

I don't think JMP has it built in, but you can do it "manually": take the R² from the auxiliary regression, multiply it by the sample size (LM = n·R²), and compare the result with the appropriate χ² value.

If you need to do it frequently, just script your own process.

Nov 14, 2013 2:52 PM

Hi Ron, I'm having trouble specifying the auxiliary regression and obtaining the chi-square p-value. These are the steps I'm taking:

1. Fit Model of interest.

2. Save residuals.

3. Square residuals.

4. Square original predictors.

5. Fit Model, put squared residuals in Y, and add these Model Effects:

A. The original predictors

B. The predictors squared (using the red triangle next to transform)

C. All possible pairwise crosses of the predictors (using the Macro "factorial to degree", with 2 selected below)

6. Use the R² from that model to determine the p-value under a chi-square distribution.
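Steps 5A–5C build what is usually called the White-test regressor set: the original predictors, their squares, and all pairwise cross products. A rough sketch in Python (the helper name is made up for illustration; note that the worked script later in the thread also crosses the squared columns, giving 10 terms rather than the 5 produced here):

```python
from itertools import combinations

def white_aux_terms(names):
    # original predictors, their squares, and pairwise cross products
    return (list(names)
            + [f"{n}^2" for n in names]
            + [f"{a}*{b}" for a, b in combinations(names, 2)])

print(white_aux_terms(["Age", "Weight"]))
# → ['Age', 'Weight', 'Age^2', 'Weight^2', 'Age*Weight']
```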

Here is an example with JSL:

```
dt = Open( "$Sample_Data/Fitness.jmp" );
fm = Fit Model(
	Y( :Runtime ),
	Effects( :Age, :Weight ),
	Personality( Standard Least Squares ),
	Emphasis( Effect Leverage ),
	Run(
		:Runtime << {Lack of Fit( 0 ), Plot Actual by Predicted( 1 ),
		Plot Regression( 0 ), Plot Residual by Predicted( 1 ),
		Plot Effect Leverage( 1 ), Show VIF( 1 )}
	)
);
fm << residuals;
dt << New Column( "res2", Formula( :Residual Runtime ^ 2 ) );
dt << New Column( "age2", Formula( :Age ^ 2 ) );
dt << New Column( "weight2", Formula( :Weight ^ 2 ) );
```

However, at this point I'm stuck. After a lot of reading online and in textbooks, I think I'll forgo a hypothesis test and just stick with subjective visual inspection to confirm homoscedasticity for all my analyses. Thanks for your help.

Nov 14, 2013 4:04 PM

At that point I assume you ran the following script:

```
Fit Model(
	Y( :Residual Runtime ),
	Effects(
		:Age, :Weight, :age2, :weight2,
		:Age * :Weight, :Age * :age2, :Age * :weight2,
		:Weight * :age2, :Weight * :weight2, :age2 * :weight2
	),
	Personality( Standard Least Squares ),
	Emphasis( Effect Screening ),
	Run(
		Profiler(
			1,
			Confidence Intervals( 1 ),
			Term Value(
				Age( 47.677 ), Weight( 77.445 ),
				age2( 2299.9 ), weight2( 6064.8 )
			)
		),
		:Residual Runtime << {Lack of Fit( 0 ), Sorted Estimates( 1 ),
		Plot Actual by Predicted( 1 ), Plot Regression( 0 ),
		Plot Residual by Predicted( 0 ), Plot Effect Leverage( 0 )}
	)
);
```

Then you got an R² of 0.377547.

At this stage I would just take that number, multiply it by the sample size 31, and get LM = n·R² = 31 × 0.377547 = 11.70396.

The p-value for this statistic can be obtained with the following script in JMP:

```
New Table( "Pvalue",
	Add Rows( 1 ),
	New Column( "P>chiSq", Numeric, Continuous, Format( "Best", 12 ),
		Formula( 1 - ChiSquare Distribution( 11.703957, 10 ) )
	)
);
```
```

This gives a value of about 0.3, so the test shows no evidence that the original model suffers from heteroscedasticity.

Hope it helped.
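As a cross-check outside JMP, the same tail probability can be computed in plain Python using the closed-form χ² survival function for even degrees of freedom (a sketch, not JMP's implementation):

```python
import math

def chi2_sf(x, df):
    # survival function of the chi-square distribution, closed form for even df
    t = x / 2
    return math.exp(-t) * sum(t ** i / math.factorial(i) for i in range(df // 2))

lm = 31 * 0.377547        # LM = n * R^2 from the auxiliary regression
p = chi2_sf(lm, 10)       # 10 regressors in the auxiliary model
print(round(lm, 6), round(p, 1))  # → 11.703957 0.3
```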

Nov 14, 2013 5:21 PM

That is very helpful, thank you. One quick question, just to double-check: the model you created predicts the residuals, not the squared residuals, is that correct? I read that the White test uses the squared residuals. If I understand correctly, in my example it would be this:

```
Fit Model(
	Y( :res2 ),
	Effects(
		:Age,
		etc.
```
This would yield an R² of 0.222979, so LM = 31 × 0.222979 = 6.912349, and 1 − ChiSquare Distribution( 6.912349, 10 ) yields a p-value of about 0.7, which is obviously still not below a reasonable cutoff like 0.05.

Nov 15, 2013 3:52 AM

Hi Kayle,

According to this link: White test - Wikipedia, the free encyclopedia, you need to use the residuals themselves (as I did).

According to this link: Breusch–Pagan test - Wikipedia, the free encyclopedia, the squared residuals are used.

Nov 15, 2013 6:33 AM

For a simple model I compared the *p* derived from the F-test when fitting the squared residuals, the corresponding χ² *p* from Ron's formula, and the Breusch–Pagan *p* from the R function bptest.

R and JMP return identical *p*-values (χ²), and the F-test *p* is not much different either.

Here is the code I used:

```
dt = Open( "$SAMPLE_DATA/Big Class.jmp" );
// Fit original model
nc = N Col( dt );
dt << Fit Model(
	Y( :weight ), Effects( :height ),
	Personality( Standard Least Squares ), Run( residuals )
);
// Fit squared residuals
dt << New Column( "res2", Values( (Column( nc + 1 ) << Get Values) ^ 2 ) );
fm = dt << Fit Model(
	Y( :res2 ), Effects( :height ),
	Personality( Standard Least Squares ), Run( residuals )
);
// Extract numbers from report
rfm = Report( fm );
R2 = rfm[Outline Box( "Summary of Fit" )][1][2][1];
n = rfm[Outline Box( "Summary of Fit" )][1][2][5];
df = rfm[Outline Box( "Analysis of Variance" )][1][2][1];
// p-value from F-test (ANOVA)
Ftest_p = rfm[Outline Box( "Analysis of Variance" )][1][5][2][1];
// Breusch-Pagan p-value with JMP
bp_p_JMP = 1 - ChiSquare Distribution( n * R2, df );
// Breusch–Pagan test with R (package lmtest must be installed)
R Init();
R Send( dt );
R Submit( "Data <- dt
names(Data)
attach(Data)
require(lmtest)
p <- bptest(weight ~ height)$p.value" );
bp_p_R = R Get( p );
R Term();
// Compare results
Show( Round( {Ftest_p, bp_p_JMP, bp_p_R}, 6 ) );
```

Results in the Log window:

Round({Ftest_p, bp_p_JMP, bp_p_R}, 6) = {0.152976, 0.145395, 0.145395};
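For completeness, the whole pipeline discussed in this thread (fit, squared residuals, auxiliary regression, LM = n·R², χ² p-value) can be sketched end to end in plain Python with no JMP or R. Everything here is illustrative: the data are synthetic, and the OLS and χ² helpers are minimal hand-rolled versions:

```python
import math

def ols(x, y):
    # simple one-predictor OLS; returns fitted values
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return [a + b * xi for xi in x]

def r2(y, fit):
    # coefficient of determination
    my = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, fit))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

def chi2_sf(x, df):
    # chi-square survival function; closed forms for df == 1 and even df
    if df == 1:
        return math.erfc(math.sqrt(x / 2))
    t = x / 2
    return math.exp(-t) * sum(t ** i / math.factorial(i) for i in range(df // 2))

# toy heteroscedastic data: error magnitude grows with x
x = list(range(1, 41))
y = [xi + (0.3 * xi if i % 2 == 0 else -0.3 * xi) for i, xi in enumerate(x)]

fit = ols(x, y)
res2 = [(yi - fi) ** 2 for yi, fi in zip(y, fit)]  # squared residuals
lm = len(x) * r2(res2, ols(x, res2))               # LM = n * R^2, auxiliary regression
p = chi2_sf(lm, 1)                                 # one auxiliary regressor -> df = 1
print(p < 0.05)  # heteroscedasticity detected on this toy data
```

With errors whose magnitude grows with x, the test rejects; on homoscedastic data the same code would return a large p-value.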