StefanC
Level III

Testing predictive capability of model in normal JMP

Hi, I have developed a model based on a training data set and would like to test the predictive capability of the model on a test data set. Is there any way I can do this in normal JMP? My model consists of both continuous and categorical effects. I guess I would have to be able to fix the parameters of the model. Or maybe this is another good reason to get JMP Pro?

6 REPLIES
P_Bartell
Level VIII

Re: Testing predictive capability of model in normal JMP

One question for you...hopefully you randomly selected the observations to include in your 'training' set? If not, I'd backtrack and start with that. Assuming you did...

 

Well, yes, if you want to evaluate models using a training/validate/test construct, which is generally a best practice for predictive modeling work, then JMP Pro is the way to go. It has lots of very useful capabilities, like the Model Comparison and Formula Depot platforms, aimed specifically at efficient evaluation of multiple models.

 

But, lacking JMP Pro, here are two simple ideas for you to consider. Save the model you've built with your training data set as a new column formula, called 'Predicted Values' or some such, in a data table containing your 'test' data set. You should now have predictions of the response in that column. Next, make a Fit Y by X plot of the 'Predicted Values' against the actual response values and examine the 'goodness of fit' between the two sets of responses. I'd force equal x and y axis limits on the chart to all but replicate the Fit Model platform's actual vs. predicted plot. Use your eye to find that 45 degree line...or just overlay one in Graph Builder.
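
If you'd rather script that step, a minimal JSL sketch might look like the lines below. It assumes the test table is the current data table and already contains the observed response in a column named "Actual" and the saved prediction formula in a column named "Predicted Values"; both names are placeholders to adjust for your own data.

// Predicted vs. actual on the test data table (column names are placeholders)
dt = Current Data Table();

// Fit Y by X (Bivariate) of the observed response versus the saved predictions;
// matching the x and y axis ranges by hand makes the 45-degree line easy to judge
dt << Bivariate(
	Y( :Actual ),
	X( :Name( "Predicted Values" ) )
);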

 

Another variation would be to subtract the Predicted from the Actual values and plot the residuals as a histogram.
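
That residual check can be scripted the same way; again the column names "Actual" and "Predicted Values" below are placeholders.

// Residual = actual response minus the model prediction
dt = Current Data Table();
dt << New Column( "Test Residual", Numeric, Continuous,
	Formula( :Actual - :Name( "Predicted Values" ) )
);

// Histogram of the test-set residuals
dt << Distribution( Continuous Distribution( Column( :Test Residual ) ) );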

 

These techniques won't do anything to evaluate overfitting...but if your emphasis is strictly prediction...well, it's a start.

StefanC
Level III

Re: Testing predictive capability of model in normal JMP

Thank you.  

 

My training data set consists of 95 observations and I applied stepwise regression with k-fold selection. Then I reduced the model further with ordinary least squares, applying a 5% significance limit. The idea was to use a new set of observations (from a later time period) as the test data set, meaning that data is not randomly assigned to the training and test data sets. Do you think there is a problem with this procedure?

 

Applying your procedure, I got a very poor R-squared for the test data set. I think the problem is that my test data set is too small (it has only 20 observations), and the values of the explanatory variables also have a smaller range than in the training data set. I find that it is not really possible to evaluate the predictive capability of my model with this test data set.
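
(For context, a test-set R-squared can be computed directly from the saved predictions with a few lines of JSL; the column names "Actual" and "Predicted Values" below are placeholders.)

// Compute R-squared on the test set from the actual and predicted columns
dt = Current Data Table();
actual = Column( dt, "Actual" ) << Get Values;
pred = Column( dt, "Predicted Values" ) << Get Values;

res = actual - pred;              // test-set residuals
sse = Sum( res :* res );          // residual sum of squares
dev = actual - Mean( actual );
sst = Sum( dev :* dev );          // total sum of squares about the mean
rsq = 1 - sse / sst;              // can even go negative for a very poor fit
Show( rsq );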

 

 

P_Bartell
Level VIII

Re: Testing predictive capability of model in normal JMP

When you say 'later period'...where I was coming from with the random selection of observations for the training set was the premise that essentially the entire collection of data is representative of a single population of study, and that you were willing to assume this for model evaluation. Now that you say 'later time'...how willing are you to assume this single-population idea holds? Granted, you have a small number of observations in both data sets...but K-fold is a form of cross validation that can help when building a model, and evaluating it for overfitting, with relatively few observations.

 

IMO, even with a small number of observations in your test data set...if your test set isn't fitting the predictions/actual results well, then that's telling you something: your model is NOT particularly effective at predicting future performance. So in a sense, your test set isn't failing you...it's screaming at you to proceed cautiously before adopting the training model as gospel.

 

In addition, you haven't shared any of the training model diagnostics...maybe there are some issues with the model that would make fitting future observations problematic. Can you share the data or diagnostics? There may be some clues hidden in these results. 

statman
Super User

Re: Testing predictive capability of model in normal JMP

Just to add to @P_Bartell's comments...if your original data set is not representative of future conditions, then a model created from that data set will not be very useful for prediction. This doesn't mean you didn't learn anything, but you'll need to expand your inference space to improve the predictability of the model (and the model may need to be modified over time).

"All models are wrong, some are useful" G.E.P. Box
StefanC
Level III

Re: Testing predictive capability of model in normal JMP

I agree; at least one of the variables in my model follows a completely different distribution in the training data set than in the test data set (see the attached file, where test=0 is the training data and test=1 is the test data). A trend analysis reveals that the level of this variable has shifted upwards. I am not sure why, but this is something to investigate.
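
A quick way to see such a shift in JMP is to compare the variable's distribution across the test indicator, for example with a small JSL sketch like the one below; the column names "X1" and "test" are placeholders, and "test" is assumed to have a nominal modeling type.

// Compare one explanatory variable between training (test = 0) and test (test = 1) rows
dt = Current Data Table();
dt << Oneway(
	Y( :X1 ),    // placeholder name for the shifted explanatory variable
	X( :test )   // 0/1 indicator column, nominal modeling type
);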

David_Burnham
Super User (Alumni)

Re: Testing predictive capability of model in normal JMP

Which platform has been used to create the model?

-Dave