Adele
Occasional Contributor

Re: Neural nets_validation sets get higher model fits than training sets

Thank you for your suggestions. We tried k-fold, and it works better in our case. However, a new problem has come up: how can we do k-fold cross-validation for bootstrap forests?

Thanks.


Re: Neural nets_validation sets get higher model fits than training sets

You cannot use K-fold cross-validation with Bootstrap Forest model selection in JMP. You can only use hold-out sets.
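
For example, here is a minimal JSL sketch of the hold-out approach. The column name, the 75/25 split, and the Bootstrap Forest launch arguments are my assumptions, so confirm the exact argument names in the Scripting Index or by saving a script from the platform's red-triangle menu.

// Hold out roughly 25% of the rows in a validation column (name and split are arbitrary choices)
dt = Current Data Table();
dt << New Column( "Validation", Numeric, "Nominal",
	Formula( If( Random Uniform() < 0.75, 0, 1 ) )	// 0 = Training, 1 = Validation
);

// Launch Bootstrap Forest (JMP Pro) with the hold-out column; role names assumed
dt << Bootstrap Forest(
	Y( :Y ),
	X( :X1, :X2, :X3 ),
	Validation( :Validation )
);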

 

Remember that most predictive model types are intended for large data sets. You might do as well with multiple regression in your case if the data meet the model assumptions.
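
If the data do meet the assumptions, the multiple regression launch in JSL is short. This is only a sketch with made-up column names:

// Ordinary multiple regression with Fit Model (column names are placeholders)
dt = Current Data Table();
dt << Fit Model(
	Y( :Y ),
	Effects( :X1, :X2, :X3 ),
	Personality( "Standard Least Squares" ),
	Run
);
// Check the residual plots for the usual assumptions (normality, constant variance, independence).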

Learn it once, use it forever!
Adele
Occasional Contributor

Re: Neural nets_validation sets get higher model fits than training sets

Thank you. Yes, I only found the script below, but I could not combine it with the bootstrap forest model.

 

// Launch the Partition platform on the Car Poll sample data
obj = Open( "$SAMPLE_DATA/Car Poll.jmp" ) << Partition(
	Y( :country ),
	X( :sex, :marital status, :age, :type, :size )
);

// Request 5-fold cross-validation, then fit
obj << K Fold Crossvalidation( 5 );
obj << Go;

Thank you so much for your help.

Adele
Occasional Contributor

Re: Neural nets_validation sets get higher model fits than training sets

Dear Mr. Bailey,

    Thanks. I have one more question:  can we use K-fold cross-validation with standard least squares regression in JMP?


Re: Neural nets_validation sets get higher model fits than training sets

No, the Fit Least Squares platform in JMP does not provide any cross-validation. The Generalized Regression platform in JMP Pro, which includes the standard linear regression model, provides both hold-out cross-validation in the launch dialog box and K-fold cross-validation as a platform validation option.
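
As a rough sketch (column names are placeholders, and the argument names should be checked against a saved script), the launch looks like this; K-fold is then chosen as the Validation Method in the Model Launch panel that appears:

// Standard linear model fit through Generalized Regression (JMP Pro)
dt = Current Data Table();
dt << Fit Model(
	Y( :Y ),
	Effects( :X1, :X2, :X3 ),
	Personality( "Generalized Regression" ),
	Generalized Distribution( "Normal" ),	// the Normal distribution gives the standard linear model
	Run
);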

 

You might consider using AICc for model selection in place of cross-validation because of the small sample size.
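
For reference, AICc is AIC with a small-sample correction, so it penalizes extra parameters more heavily when n is small. The numbers below are made up purely for illustration:

// AICc = AIC + 2k(k+1)/(n - k - 1); smaller values are better when comparing models
logL = -42.7;	// maximized log-likelihood from a fitted model (made-up value)
k = 4;   	// number of estimated parameters, including the error variance
n = 30;  	// sample size
aic = -2 * logL + 2 * k;
aicc = aic + (2 * k * (k + 1)) / (n - k - 1);
Show( aic, aicc );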

 

You might also consider using open-source software if it provides the flexibility you demand. You can, for example, connect JMP to R or Python and work with both together.
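
A minimal sketch of the Python bridge from JSL (the Python-side variable names are assumptions; the same pattern works for R with the R Init / R Submit functions):

// Send the current table to Python, run some Python code, and pull a result back
dt = Current Data Table();
Python Init();
Python Send( dt );	// arrives as a pandas DataFrame, assumed to be named dt on the Python side
Python Submit( "\[
# any Python code can run here, e.g. scikit-learn cross-validation on dt
n_rows = dt.shape[0]
]\" );
nrows = Python Get( n_rows );
Python Term();
Show( nrows );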

Learn it once, use it forever!


Re: Neural nets_validation sets get higher model fits than training sets

JMP Pro does offer k-fold for regression analysis under the Stepwise Platform. It is used to determine which terms to keep in the model. From the JMP Documentation:

 

K-Fold Crossvalidation (Available only for continuous responses.) Performs K-Fold cross validation in the selection process. When selected, this option enables the Max K-Fold RSquare stopping rule (“Stepwise Regression Control Panel” on page 250). For more information about validation, see “Using Validation” on page 275.
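
As a sketch with made-up column names, the stepwise launch looks like this; the K-Fold Crossvalidation option and the Max K-Fold RSquare stopping rule described above are then set in the Stepwise control panel:

// Stepwise selection for a continuous response
dt = Current Data Table();
dt << Fit Model(
	Y( :Y ),
	Effects( :X1, :X2, :X3 ),
	Personality( "Stepwise" ),
	Run
);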

Dan Obermiller

Re: Neural nets_validation sets get higher model fits than training sets

I forgot to mention that K-fold cross-validation is specifically for cases with small sample sizes. Did you try this method instead of a hold-out validation set?

Learn it once, use it forever!


Adele
Occasional Contributor

Re: Neural nets_validation sets get higher model fits than training sets

Dear Mr. Bailey,
    As recommended, I used k-fold cross-validation with neural nets and stepwise regression (enter all). I wrote a loop and ran each analysis 100 times. The results show that the mean R square for the validation sets is not higher than that for the training sets.
    Now I am wondering why K-fold cross-validation works better with small sample sizes, and why holdout validation with neural nets gives the awkward result that the validation sets have higher mean R squares than the training sets. What is the theoretical or mathematical reason?
    Thank you. Sorry for bothering you again.


Re: Neural nets_validation sets get higher model fits than training sets

You are not bothering anyone here! Questions like yours are one of the main reasons for having the Discussions area in the JMP Community! On the other hand, some discussions might be beyond the scope or capability of this format.

 

NOTE: we are discussing predictive models, not explanatory models. As such, we only care about the ability of the model to predict the future. The future is unavailable while training and selecting a model.

 

Cross-validation of any kind is not the result of a theory. It is a practical solution to the problem of honestly assessing the performance of models before and after selecting one. The hold-out approach was developed first, for large data sets, where inference is not much help for model selection. The future is not available, so what do you do? Set aside, or hold out, a portion of the data (the validation set). These data are not available to the training procedure. Then apply the fitted model to the hold-out set to obtain predictions. How did the model do? How did other models do?

 

Since the validation data were used to select the model, some modelers hold out a third portion (the test set) to evaluate the selected model.
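
In JSL, that split is often nothing more than a random indicator column like the hypothetical one below (the column name and proportions are arbitrary); JMP Pro also has a Make Validation Column utility that does this with more options.

// Random training / validation / test assignment (proportions are arbitrary)
dt = Current Data Table();
dt << New Column( "Validation", Numeric, "Nominal",
	Formula(
		Local( {u = Random Uniform()},
			If( u < 0.60, 0,	// 0 = Training (~60%)
				u < 0.80, 1,	// 1 = Validation (~20%)
				2           	// 2 = Test (~20%)
			)
		)
	)
);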

 

Hold-out cross-validation is a very good solution, but it relies on the original set of observations being large. What do you do when the sample size is small? The K-fold approach was developed specifically for such cases. Some modelers take it to the extreme and set K equal to the sample size; this version is known as 'leave-one-out.' K-fold works better than hold-out sets in such cases because it uses the data more efficiently: every observation serves in both training and validation.
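
A sketch of the mechanics, with an arbitrary fold column: each of the K folds takes one turn as the validation set while the model is trained on the other K - 1 folds, and the K validation statistics are averaged. Setting K equal to the number of rows gives leave-one-out.

// Assign each row to one of K = 5 folds at random (fold sizes are only roughly equal)
dt = Current Data Table();
dt << New Column( "Fold", Numeric, "Nominal",
	Formula( Random Integer( 1, 5 ) )
);
// Fold 1 is held out while folds 2-5 train the model, then fold 2 is held out, and so on.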

Learn it once, use it forever!
