Hi @ih,
From my side, using "Model Screening" with the setup you proposed does work for K-fold cross-validation, but not always for the Leave-One-Out method (it depends on the dataset).
If I set K equal to the number of observations, an error message appears: "The validation sets inside each of the folds are too small to support some methods", even if only Bootstrap Forest is checked in the "Method" panel. So I also thought about using the Model Screening platform for this, but it may not always be possible, depending on the dataset: on the "Boston Housing Prices" dataset I get no summary of the folds and missing values in the details of each fold. It does work for the "Big Class" dataset.
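For intuition about why K = number of observations breaks down: Leave-One-Out is simply K-fold with K = n, so every validation fold holds a single row, and fold-level statistics such as RSquare are not defined on one observation, which is presumably what the error message is pointing at. Here is a minimal sketch outside JMP, using scikit-learn only as a stand-in; the dataset, model settings, and row subset are illustrative assumptions, not the JMP setup:

```python
# Illustration only (scikit-learn, not JMP): leave-one-out cross-validation
# is K-fold with K equal to the number of rows, so each validation set has
# exactly one observation.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

X, y = load_diabetes(return_X_y=True)
X, y = X[:80], y[:80]  # small subset so the example runs quickly (illustrative choice)
model = RandomForestRegressor(n_estimators=50, random_state=0)

# 5-fold CV: each validation fold holds ~20% of the rows, so an R^2 per fold is defined.
kfold_r2 = cross_val_score(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))

# Leave-one-out (= KFold(n_splits=len(X))): one row per validation fold, so a
# per-fold R^2 cannot be computed; a pointwise error such as squared error can.
loo_mse = -cross_val_score(model, X, y, cv=LeaveOneOut(), scoring="neg_mean_squared_error")

print("5-fold R^2 per fold:", kfold_r2.round(3))
print("LOO mean squared error:", loo_mse.mean().round(1))
```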
Hi @tuo88138,
You can follow the method described by Chris Gotwalt; I just tested it and it worked perfectly. You might have to uncheck "Early Stopping" in the Bootstrap Forest analysis panel to avoid blank values for the various metrics in the output (RSquare, RASE, etc.).
This technique can be useful for comparing the contribution importance of variables across several simulations (see the attached capture "Contribution-importance_simulations"), or for providing confidence intervals on metrics such as RSquare (see the capture "Confidence_Intervals_Rsquare" for the Training RSquare on the same dataset with the same number of simulations).
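If it helps to see the idea in code, below is a rough analogue in Python/scikit-learn of repeating the fit over many simulations and summarising the results; it is not Chris Gotwalt's JMP workflow itself, and the dataset, the number of simulations, and the percentile-interval choice are illustrative assumptions:

```python
# Rough analogue (scikit-learn, not JMP's Simulate feature): refit a random
# forest on repeated bootstrap resamples, collect the training R^2 and the
# variable importances from each run, then summarise them across simulations.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
rng = np.random.default_rng(0)
n_sims = 100  # number of simulations (illustrative)

r2_values, importances = [], []
for _ in range(n_sims):
    rows = rng.integers(0, len(X), len(X))            # bootstrap resample of the rows
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[rows], y[rows])
    r2_values.append(model.score(X[rows], y[rows]))   # training R^2 for this run
    importances.append(model.feature_importances_)    # variable importances for this run

# 95% percentile interval for the training R^2 across the simulations
lo, hi = np.percentile(r2_values, [2.5, 97.5])
print(f"Training R^2: mean {np.mean(r2_values):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")

# Mean importance per predictor, to compare contributions across simulations
print("Mean variable importances:", np.mean(importances, axis=0).round(3))
```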
I hope this answer helps,
Victor GUILLER
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)