
JMP User Community : Community Discussions : K-fold Cross-validation for Neural Networks


K-fold Cross-validation for Neural Networks

Jun 19, 2020 11:59 PM

My question concerns the use of K-fold cross-validation for artificial neural networks (NN). Specifically, I want to know where the final NN model parameters come from. Were they obtained by a fit to the entire data set? Or were they from a fit to one of the K data sets, each consisting of K-1 folds, that were used for training? If so, how was the best training data set chosen? The SAS documentation is not clear on this issue. Does anyone have an answer?

2 Replies


Re: K-fold Cross-validation for Neural Networks

My understanding is that the parameters from the best of the K models are used.

This is from the help documentation on the Generalized Regression platform but I expect the same to apply to the Neural platform: https://www.jmp.com/support/help/en/15.1/#page/jmp/validation-method-options.shtml

"For each value of the tuning parameter, the following steps are conducted:

–The observations are partitioned into k subsets, or folds.

–In turn, each fold is used as a validation set. A model is fit to the observations not in the fold. The log-likelihood based on that model is calculated for the observations in the fold, providing a validation log-likelihood.

–The mean of the validation log-likelihoods for the k folds is calculated. This value serves as a validation log-likelihood for the value of the tuning parameter.

The value of the tuning parameter that has the maximum validation log-likelihood is used to construct the final solution. To obtain the final model, all k models derived for the optimal value of the tuning parameter are fit to the entire data set. Of these, the model that has the highest validation log-likelihood is selected as the final model. The training set used for that final model is designated as the Training set and the holdout fold for that model is the Validation set. These are the Training and Validation sets used in plots and in the reported results for the final solution."
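To make the quoted procedure concrete, here is a minimal sketch of the fold-selection rule in Python. This is not JMP code; the toy Gaussian "model" and names like `fit_model` are illustrative assumptions, and the tuning-parameter loop from the documentation is omitted so only the per-fold fitting and best-fold selection are shown.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=5.0, scale=2.0, size=30)  # toy response data

def fit_model(train):
    # Toy "model": a Gaussian with mean/sd estimated from the training folds.
    return train.mean(), train.std(ddof=1)

def log_likelihood(params, holdout):
    # Gaussian log-likelihood of the held-out fold under the fitted model.
    mu, sd = params
    return np.sum(-0.5 * np.log(2 * np.pi * sd**2)
                  - (holdout - mu) ** 2 / (2 * sd**2))

k = 5
folds = np.array_split(rng.permutation(len(y)), k)

results = []
for i, fold in enumerate(folds):
    train_idx = np.setdiff1d(np.arange(len(y)), fold)
    params = fit_model(y[train_idx])      # fit on the other k-1 folds
    ll = log_likelihood(params, y[fold])  # score on the held-out fold
    results.append((ll, i, params))

# Mean validation log-likelihood for this k-fold run; in the full procedure
# this value would be compared across candidate tuning-parameter values.
mean_ll = np.mean([r[0] for r in results])

# Of the k fold models, the one with the highest validation log-likelihood
# supplies the reported Training/Validation split for the final solution.
best_ll, best_fold, best_params = max(results)
print(f"best fold: {best_fold}, mean validation LL: {mean_ll:.2f}")
```

The selection step at the end is the part the original question is about: the designated Training set is the k-1 folds used to fit the winning model, and its holdout fold becomes the reported Validation set.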

Re: K-fold Cross-validation for Neural Networks

So, it seems that the final model parameters are those fit to the "final model Training set," i.e., the training set paired with the "Validation set" (holdout fold) that produces the best validation log-likelihood. That makes sense, though the description confused me at first. Thanks for your response.
