
Interpreting Regression Analysis Results


Learn more in our free online course:
Statistical Thinking for Industrial Problem Solving

 

In this video, we again use the Cleaning data to fit a regression model for Removal versus ID using Fit Y by X. We'll discuss the statistical output provided and see how to make predictions using our regression equation.

 

We'll again use Removal as the Y, Response, and ID as the X, Factor.

 

We'll select Fit Line from the red triangle to fit the regression model.

 

In a previous video we conducted a residual analysis and verified that our model assumptions were not violated.

 

Now, let's take a look at the default statistical output.

 

The summary statistics for the model fit, including root mean square error and RSquare, are reported in the Summary of Fit table. Our root mean square error is 1.7, and RSquare is 0.64.

 

Thus, ID explains 64% of the variation in Removal.
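
For reference, these two summary statistics can be reproduced outside of JMP from any least squares fit. The minimal Python sketch below uses statsmodels with a small placeholder dataset (the ID and Removal values are made up for illustration, not the actual Cleaning data), so the printed numbers won't match the 1.7 and 0.64 reported here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder stand-in for the Cleaning data -- values are made up for illustration
df = pd.DataFrame({
    "ID":      [4, 5, 6, 7, 8, 9, 10, 11, 12, 13],
    "Removal": [6.2, 7.5, 7.1, 8.8, 9.0, 10.3, 10.9, 11.6, 13.0, 12.4],
})

# Simple linear regression of Removal on ID (the same kind of model Fit Line produces)
fit = smf.ols("Removal ~ ID", data=df).fit()

rmse = np.sqrt(fit.mse_resid)   # root mean square error: square root of the residual mean square
r2 = fit.rsquared               # proportion of variation in Removal explained by ID

print(f"RMSE    = {rmse:.2f}")
print(f"RSquare = {r2:.2f}")
```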

 

The overall test results for the significance of the model are reported in the ANOVA table. The p-value is very small, so we can conclude that the model is significant.
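
The same fitted model also carries the overall F test that JMP summarizes in its Analysis of Variance table. Continuing the placeholder sketch above:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data, as in the sketch above
df = pd.DataFrame({
    "ID":      [4, 5, 6, 7, 8, 9, 10, 11, 12, 13],
    "Removal": [6.2, 7.5, 7.1, 8.8, 9.0, 10.3, 10.9, 11.6, 13.0, 12.4],
})

fit = smf.ols("Removal ~ ID", data=df).fit()

# Overall F test for the model: is the regression significant as a whole?
print(f"F ratio = {fit.fvalue:.2f}")
print(f"p-value = {fit.f_pvalue:.4f}")
```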

 

As we saw earlier, the fitted linear model is reported under Linear Fit.

 

The estimated intercept and slope coefficients for this model are reported in the Parameter Estimates table under Estimate.
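
In statsmodels, the analogue of the Estimate column is the params attribute of the fitted model. Again using the placeholder data rather than the real Cleaning table:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data, as in the earlier sketches
df = pd.DataFrame({
    "ID":      [4, 5, 6, 7, 8, 9, 10, 11, 12, 13],
    "Removal": [6.2, 7.5, 7.1, 8.8, 9.0, 10.3, 10.9, 11.6, 13.0, 12.4],
})

fit = smf.ols("Removal ~ ID", data=df).fit()

# Estimated intercept and slope (the Estimate column in Parameter Estimates)
print(fit.params)       # index: Intercept, ID
print(fit.summary())    # full table with standard errors, t ratios, and p-values
```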

 

We might be interested in using the model to predict the average Removal for different values of ID. As we have seen, we can simply plug a value of ID into the equation to calculate the predicted Removal.
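
As a small illustration of this "plugging in," the sketch below pulls the estimated intercept and slope out of the placeholder fit and evaluates the equation at ID = 10. The coefficients come from the made-up data above, not from the Cleaning study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data, as above
df = pd.DataFrame({
    "ID":      [4, 5, 6, 7, 8, 9, 10, 11, 12, 13],
    "Removal": [6.2, 7.5, 7.1, 8.8, 9.0, 10.3, 10.9, 11.6, 13.0, 12.4],
})

fit = smf.ols("Removal ~ ID", data=df).fit()
b0, b1 = fit.params["Intercept"], fit.params["ID"]

# Predicted mean Removal at ID = 10: plug the value into the fitted equation
id_value = 10
predicted = b0 + b1 * id_value
print(f"Predicted Removal at ID = {id_value}: {predicted:.2f}")
```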

 

There are also a number of ways to make predictions directly in JMP.

 

We can select Save Predicteds from the red triangle for Linear Fit to save the formula for the model to the data table.

 

This computes the predicted Removal for all of our observations. We can add a new row and enter a value for ID, and JMP will predict the average Removal for this value.
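
A rough Python analogue of Save Predicteds, using the same placeholder fit, stores the fitted values in a new column and then predicts the mean Removal for an ID value that wasn't among the original rows:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data, as above
df = pd.DataFrame({
    "ID":      [4, 5, 6, 7, 8, 9, 10, 11, 12, 13],
    "Removal": [6.2, 7.5, 7.1, 8.8, 9.0, 10.3, 10.9, 11.6, 13.0, 12.4],
})

fit = smf.ols("Removal ~ ID", data=df).fit()

# Analogous to Save Predicteds: store the predicted Removal for every observation
df["Predicted Removal"] = fit.fittedvalues

# "Adding a new row": predict the mean Removal at an ID value we didn't observe
new_row = pd.DataFrame({"ID": [9.5]})
print(fit.predict(new_row))
```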

 

Keep in mind that it only makes sense to enter values of the predictor within the range used to build the model. The model may not predict well if we extrapolate beyond the range of our predictor values.

 

 

Let's return to the analysis window.

 

If we're only interested in approximating the mean Removal, we can use the cross-hair tool from our toolbar. When we click on the line, the cross-hair tool will display the predicted Removal for a given value of ID.

 

We can also display two types of intervals on our Bivariate Fit Plot: Confidence Intervals and Prediction Intervals.

 

The Confidence Curves Fit option from Linear Fit displays confidence bands. These bands represent confidence intervals for the mean Removal for a given value of ID.

 

For example, the confidence interval for the mean Removal for parts with an ID of 10 is approximately 10.5 to 11.5 units.
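
An equivalent confidence interval for the mean can be computed outside of JMP with get_prediction. The interval below comes from the placeholder data, so its endpoints won't match the 10.5 to 11.5 seen on the plot:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data, as above
df = pd.DataFrame({
    "ID":      [4, 5, 6, 7, 8, 9, 10, 11, 12, 13],
    "Removal": [6.2, 7.5, 7.1, 8.8, 9.0, 10.3, 10.9, 11.6, 13.0, 12.4],
})

fit = smf.ols("Removal ~ ID", data=df).fit()

# 95% confidence interval for the MEAN Removal at ID = 10
pred = fit.get_prediction(pd.DataFrame({"ID": [10]})).summary_frame(alpha=0.05)
print(pred[["mean", "mean_ci_lower", "mean_ci_upper"]])
```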

 

The Confidence Curves Indiv (Individual) option displays prediction bands. These bands represent prediction intervals for individual values of Removal for given values of ID.

 

For example, the predicted range of values for parts with an ID of 10 is approximately 7.5 to 14.5.
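
A prediction interval for an individual part at the same ID value reads the obs_ci columns of the same summary frame (again computed from the placeholder data, so the endpoints are illustrative only):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data, as above
df = pd.DataFrame({
    "ID":      [4, 5, 6, 7, 8, 9, 10, 11, 12, 13],
    "Removal": [6.2, 7.5, 7.1, 8.8, 9.0, 10.3, 10.9, 11.6, 13.0, 12.4],
})

fit = smf.ols("Removal ~ ID", data=df).fit()

# 95% prediction interval for an INDIVIDUAL part's Removal at ID = 10
pred = fit.get_prediction(pd.DataFrame({"ID": [10]})).summary_frame(alpha=0.05)
print(pred[["mean", "obs_ci_lower", "obs_ci_upper"]])
```

The prediction interval is wider than the confidence interval because it accounts for the scatter of individual Removal values around the mean, not just the uncertainty in the estimated mean.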