
- Testing predictive capability of model in normal JMP


Aug 21, 2020 5:53 AM
(310 views)

Hi, I have developed a model based on a training data set and would like to test the predictive capability of the model on a test data set. Is there any way I can do this in normal JMP? My model contains both continuous and categorical effects. I guess I would have to be able to fix the parameters of the model. Or maybe this is another good reason to get JMP Pro?

1 ACCEPTED SOLUTION


One question for you... hopefully you randomly selected the observations to include in your 'training' set? If not, I'd backtrack and start with that. Assuming you did...

Well, yes, if you want to evaluate models using a training/validation/test construct, which is generally a best practice for predictive modeling work... then JMP Pro is the way to go. It offers very useful capabilities, such as the Model Comparison and Formula Depot platforms, aimed specifically at efficient evaluation of multiple models.

But, lacking JMP Pro, here are two simple ideas for you to consider. Save the model you've built with your training data set as a new column formula, calling it 'Predicted Values' or some such, in a data table containing your 'test' data set. You should now have predictions of the response in that column. Next, make a Fit Y by X plot of the 'Predicted Values' against the actual responses and examine the 'goodness of fit' between the two sets of values. I'd force equal x- and y-axis limits on the chart to all but replicate the actual-vs.-predicted plot in the Fit Model platform report. Use your eye to find the 45-degree line... or just overlay one in Graph Builder.

Another variation is to subtract the predicted from the actual values and plot the residuals as a histogram.

These techniques won't do anything to evaluate overfitting... but if your emphasis is strictly prediction... well, it's a start.
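The two checks described above can also be sketched numerically outside JMP. Here's a minimal Python illustration using made-up data, with a plain one-predictor OLS fit standing in for a saved JMP prediction formula; all variables and values are hypothetical.

```python
# Sketch of the two checks: predicted vs. actual, and residuals.
# A numpy OLS fit stands in for the saved JMP prediction formula;
# the data below is entirely hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: y = 2 + 3x + noise, 95 observations
x_train = rng.uniform(0, 10, 95)
y_train = 2 + 3 * x_train + rng.normal(0, 1, 95)

# Fit OLS (intercept + slope) on the training data, analogous to Fit Model
X = np.column_stack([np.ones_like(x_train), x_train])
beta, *_ = np.linalg.lstsq(X, y_train, rcond=None)

# Apply the *fixed* training-set coefficients to a held-out test set
x_test = rng.uniform(0, 10, 20)
y_test = 2 + 3 * x_test + rng.normal(0, 1, 20)
y_pred = beta[0] + beta[1] * x_test        # the 'Predicted Values' column

# Check 1: predicted vs. actual -- points should hug the 45-degree line
# (in JMP: plot actual vs. predicted in Fit Y by X with equal axes)
corr = np.corrcoef(y_pred, y_test)[0, 1]

# Check 2: residuals -- should be centered near zero with no pattern
residuals = y_test - y_pred
print(round(corr, 3), round(residuals.mean(), 3))
```

A strong model shows points tight along the diagonal and residuals centered on zero; systematic curvature or offset in either view flags a prediction problem.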

6 REPLIES



Re: Testing predictive capability of model in normal JMP

Thank you.

My training data set consists of 95 observations, and I applied stepwise regression with k-fold selection. Then I reduced the model further with ordinary least squares, applying a 5% significance limit. The idea was to use a new set of observations (from a later time period) as the test data set, meaning that data is not randomly assigned to the training and test data sets. Do you think there is a problem with this procedure?

Applying your procedure, I got a very poor R-squared for the test data set. I think the problem is that my test data set is too small (it has only 20 observations), and the values of the explanatory variables have a smaller range than in the training data set. I find that it is not really possible to evaluate the predictive capability of my model with this test data set.
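As an aside, a restricted predictor range in the test set can by itself depress out-of-sample R-squared, because R-squared compares prediction error against the test set's own variance. A minimal Python sketch with hypothetical numbers (not the actual data from this thread) illustrates the effect:

```python
# Illustration (hypothetical numbers) of why a narrow-range test set can give
# a very poor out-of-sample R^2 even when the model's errors are unchanged.
import numpy as np

def r_squared(actual, predicted):
    """Out-of-sample R^2: 1 - SS_res / SS_tot, using the test-set mean."""
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1 - ss_res / ss_tot

rng = np.random.default_rng(1)
true_slope, noise_sd = 3.0, 1.0

# Wide-range test set: predictor spans 0..10
x_wide = rng.uniform(0, 10, 20)
y_wide = true_slope * x_wide + rng.normal(0, noise_sd, 20)
r2_wide = r_squared(y_wide, true_slope * x_wide)   # model with exact coefficients

# Narrow-range test set: predictor spans only 4..6, same error size
x_narrow = rng.uniform(4, 6, 20)
y_narrow = true_slope * x_narrow + rng.normal(0, noise_sd, 20)
r2_narrow = r_squared(y_narrow, true_slope * x_narrow)

print(round(r2_wide, 2), round(r2_narrow, 2))
```

Both test sets use a model with perfect coefficients and identical noise, yet the narrow-range set reports a much lower R-squared simply because there is less response variance to explain.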


Re: Testing predictive capability of model in normal JMP

When you say 'later period'... My point about randomly selecting observations for the training set rested on the premise that the entire collection of data represents a single population of study, and that you were willing to assume this for model evaluation. Now that you say 'later time'... how willing are you to assume this single-population idea is true? Granted, you have a small number of observations in both data sets... but k-fold is a form of cross validation that can help when relatively few observations are available for creating a model and evaluating it for overfitting.
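For reference, the k-fold idea mentioned above can be sketched in a few lines of plain Python/numpy. The data is hypothetical and a one-predictor OLS stands in for the stepwise model; the point is that each fold takes a turn as the held-out set, so a small data set is reused for both fitting and checking.

```python
# Minimal k-fold cross-validation sketch (plain numpy, hypothetical data),
# mirroring the idea behind JMP's k-fold option in Stepwise.
import numpy as np

def kfold_rmse(x, y, k=5, seed=0):
    """Average held-out RMSE of a one-predictor OLS model over k folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))          # shuffle, then split into k folds
    folds = np.array_split(idx, k)
    rmses = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        X = np.column_stack([np.ones(len(train)), x[train]])
        beta, *_ = np.linalg.lstsq(X, y[train], rcond=None)
        pred = beta[0] + beta[1] * x[test]
        rmses.append(np.sqrt(np.mean((y[test] - pred) ** 2)))
    return float(np.mean(rmses))

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 95)            # 95 observations, as in this thread
y = 2 + 3 * x + rng.normal(0, 1, 95)
avg_rmse = kfold_rmse(x, y)
print(round(avg_rmse, 2))             # should land near the noise SD (1.0)
```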

IMO, even with a small number of observations in your test data set... if your test set doesn't fit the predictions well, that's telling you something: your model is NOT particularly effective at predicting future performance. So in a sense, your test set isn't failing you... it's screaming at you to proceed cautiously before adopting the training model as gospel.

In addition, you haven't shared any of the training model diagnostics... maybe there are issues with the model that would make fitting future observations problematic. Can you share the data or diagnostics? There may be some clues hidden in those results.

Re: Testing predictive capability of model in normal JMP

Just to add to @P_Bartell's comments...If your original data set is not representative of future conditions, then the ability of a model that is created from said data set is not very useful for prediction. This doesn't mean you didn't learn anything, but you'll need to expand your inference space to improve the predictability of the model (and the model may need to be modified over time).


Re: Testing predictive capability of model in normal JMP


I agree, at least one of the variables in my model follows a completely different distribution in the training data set than in the test data set (see attached file with test=0 equal to training data and test=1 equal to test data). A trend analysis reveals that the level of this variable has shifted upwards. I am not sure why but this is something to investigate.
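A quick numeric check for this kind of level shift can be sketched in Python (hypothetical data below; in JMP, the Distribution platform with the test indicator as a By variable does the same job visually):

```python
# Compare a predictor's distribution in the training rows vs. the test rows.
# The data here is hypothetical: a later-period sample with an upward shift.
import numpy as np

def shift_summary(train_vals, test_vals):
    """Standardized mean shift and range overlap between two samples."""
    shift = (test_vals.mean() - train_vals.mean()) / train_vals.std(ddof=1)
    overlap = (min(train_vals.max(), test_vals.max())
               - max(train_vals.min(), test_vals.min()))
    return shift, overlap

rng = np.random.default_rng(3)
train = rng.normal(10, 2, 95)          # training period (test=0)
test = rng.normal(14, 2, 20)           # later period (test=1), level shifted up
shift, overlap = shift_summary(train, test)
print(round(shift, 1), round(overlap, 1))
```

A standardized shift much above 1 means the test observations sit largely outside the region where the model was trained, so predictions there are extrapolations.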



Re: Testing predictive capability of model in normal JMP

Which platform has been used to create the model?

-Dave