Prediction Intervals for Predictive Models in the Profiler Using Bagging

What inspired this wish list request?

 

Models in the Predictive Modeling platform do not have a default capability to produce confidence intervals or prediction intervals around model predictions; however, JMP provides a way to generate confidence intervals around the predictions in the profiler using bagging (https://www.jmp.com/support/help/en/18.0/#page/jmp/example-of-confidence-intervals-using-bagging.sht...). Unfortunately, there does not seem to be a way to produce prediction intervals in the profiler for predictive models in JMP (I am a JMP 17 user, so I cannot rule out that this capability has been added, but based on the JMP 18 help entry I don't believe it has). Based on my understanding of how the confidence intervals are calculated, I suspect it is feasible to produce these prediction intervals as well.
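
To illustrate the reasoning, here is a minimal sketch (in Python, not JSL, and not JMP's actual implementation) of how bagging yields a confidence interval for the mean prediction, and how a prediction interval could plausibly be derived by also accounting for residual variance. The model, data, and residual-based widening shown here are illustrative assumptions, not a description of how JMP would do it.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy training data (placeholder for the user's actual data).
X = rng.uniform(0, 10, size=(200, 1))
y = 2.0 * X[:, 0] + rng.normal(0, 1.5, size=200)

# Fit one model per bootstrap resample (the "bagging" step).
n_bags = 200
bagged_models = []
for _ in range(n_bags):
    idx = rng.integers(0, len(X), size=len(X))
    m = DecisionTreeRegressor(max_depth=4).fit(X[idx], y[idx])
    bagged_models.append(m)

# Predictions from every bagged model at a new settings point.
x_new = np.array([[5.0]])
preds = np.array([m.predict(x_new)[0] for m in bagged_models])

# Confidence interval for the mean prediction: spread of the bagged predictions.
ci = np.percentile(preds, [2.5, 97.5])

# Prediction interval (assumed approach): also add residual (noise) variance,
# estimated here from the training residuals of the bagged mean prediction.
mean_train_pred = np.mean([m.predict(X) for m in bagged_models], axis=0)
sigma_resid = np.std(y - mean_train_pred, ddof=1)
pi_half = 1.96 * np.sqrt(preds.var(ddof=1) + sigma_resid**2)
pi = (preds.mean() - pi_half, preds.mean() + pi_half)

print("bagged mean prediction:", preds.mean())
print("95% confidence interval:", ci)
print("approx. 95% prediction interval:", pi)
```

The point of the sketch is simply that the same bagged predictions used for the confidence interval, combined with an estimate of the residual noise, contain the ingredients needed for a prediction interval.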

 

 

What is the improvement you would like to see? 

 

With JMP 18's renewed focus on prediction intervals in the profiler (https://community.jmp.com/t5/JMPer-Cable/Prediction-Profiler-enhancements-in-JMP-18/ba-p/718342), my hope is that prediction intervals for predictive models will be added. It would also be helpful to be able to generate column formulas for both confidence intervals and prediction intervals for predictive models.

 

Why is this idea important?

 

Prediction intervals are useful for judging whether data generated by the proposed model resemble the actual data, a check often called a "posterior predictive check" in the Bayesian literature. If the data generated by the model do not resemble the actual data, it may be desirable to consider a different model. A description of this concept, along with a simple example, from John Kruschke's book "Doing Bayesian Data Analysis" is given below:

 

"The fifth step is to check that the model, with its most credible parameter values, actually mimics the data reasonably well. This is called a posterior predictive check... One approach is to plot a summary of predicted data from the model against the actual data... The predicted weight values are summarized by vertical bars that show the range of the 95% most credible predicted weight values. The dot at the middle of each bar shows the mean of the predicted weight values. By visual inspection of the graph, we can see that the actual data appear to be well described by the predicted data. The actual data do not appear to deviate systematically from the trend or band predicted from the model. If the actual data did appear to deviate systematically from the predicted form, then we could contemplate alternative descriptive models. For example, the actual data might appear to have a nonlinear trend. In that case, we could expand the model to include nonlinear trends... We could also examine the distributional properties of the data. For example, if the data appear to have outliers relative to what is predicted by a normal distribution, we could change the model to use a heavy-tailed distribution..."

 

[Figure: Kruschke's posterior predictive check example, showing 95% predicted ranges as vertical bars with mean predictions as dots, overlaid on the actual data.]
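
For readers who want to see what such a check looks like in practice, here is a minimal sketch of the plot described in the quote, using simulated data and a normal error model as a stand-in for draws from a fitted predictive model; all names and values are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Toy "actual" data (placeholder for the user's data set).
x = np.linspace(0, 10, 30)
y = 2.0 * x + rng.normal(0, 2.0, size=x.size)

# Simulated predicted data from a simple linear model with normal noise.
n_draws = 1000
y_sim = 2.0 * x + rng.normal(0, 2.0, size=(n_draws, x.size))

# 95% range and mean of the predicted responses at each predictor value.
lo, hi = np.percentile(y_sim, [2.5, 97.5], axis=0)
mean_pred = y_sim.mean(axis=0)

fig, ax = plt.subplots()
ax.vlines(x, lo, hi, color="steelblue", label="95% predicted range")
ax.plot(x, mean_pred, "o", color="steelblue", label="mean predicted value")
ax.plot(x, y, "k.", label="actual data")
ax.set_xlabel("predictor")
ax.set_ylabel("response")
ax.legend()
plt.show()
```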

 

 

 

1 Comment
mia_stephens
Staff
Status changed to: Acknowledged

Thank you for submitting this request and for providing the details, @mminor. Yes, we've added prediction intervals in many JMP platforms, but not in many of the predictive modeling platforms. We have shared this request with development for consideration.