Hi @Victor_G ,
Sorry if it wasn't clear, but we are actually referring to the same thing. When I mentioned models before, I was talking about generating models using the same training/validation/test setup, but across different platforms in JMP. I never build models with any one platform in mind; instead, I generate and tune models in the different platforms (e.g. NN, Boosted Tree, Bootstrap Forest, SVM, KNN, and XGBoost) using cross-validation with the training and validation sets. Then, after I've generated the different models, I compare their ability to predict on the holdout (test) set -- data that none of the models have seen during their training and validation steps. Sometimes the NN works best, and sometimes it's XGBoost or some other platform.
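The compare-on-holdout step can be sketched in plain Python. The two "models" below are toy stand-ins (placeholder lambdas, not real NN/XGBoost fits) and the holdout rows are made up for illustration; the point is only the selection logic -- score every tuned model on the same unseen test rows and keep the winner:

```python
# Toy sketch of comparing already-tuned models on a shared holdout set.
# In JMP, each entry in `models` would be a prediction formula saved from
# a different platform (NN, Boosted Tree, XGBoost, ...).

def accuracy(model, rows):
    """Fraction of holdout rows the model classifies correctly."""
    return sum(model(x) == y for x, y in rows) / len(rows)

# Hypothetical holdout (test) rows: (feature value, true class label).
# None of the models were tuned on these rows.
holdout = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1), (0.1, 0)]

# Stand-ins for models already tuned with the training/validation sets.
models = {
    "always_zero": lambda x: 0,               # naive baseline
    "threshold_0.5": lambda x: int(x > 0.5),  # simple decision rule
}

scores = {name: accuracy(m, holdout) for name, m in models.items()}
best = max(scores, key=scores.get)
print(best, scores)  # whichever model predicts the holdout best wins
```

Because the holdout is identical for every candidate, the comparison is apples-to-apples regardless of which platform produced each model.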
Lastly @sreekumarp , one thing I forgot to mention in my previous post: it really helps to stratify your cross-validation column using your output column -- the column you're trying to predict. Stratification keeps a similar distribution of the response across your training, validation, and test sets. Sometimes this can't be done, especially with highly imbalanced data sets, but if possible I highly recommend it. Without it, rows are assigned to each set purely at random, which can produce a poorly divided validation column and, in turn, poor models.
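To show what stratification buys you, here is a small stand-alone Python sketch (the `stratified_split` helper and the 60/20/20 fractions are my own illustration, not a JMP function -- inside JMP the Make Validation Column utility does the equivalent when you give it a stratification column). Each class is shuffled and cut separately, so every subset inherits the same class mix as the whole table:

```python
import random
from collections import defaultdict

def stratified_split(labels, fracs=(0.6, 0.2, 0.2), seed=42):
    """Assign each row to Training/Validation/Test so that every
    subset keeps roughly the same class distribution as `labels`."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for i, y in enumerate(labels):      # group row indices by class
        by_label[y].append(i)
    assignment = [None] * len(labels)
    for y, idxs in by_label.items():
        rng.shuffle(idxs)               # shuffle within each class
        n = len(idxs)
        cut1 = int(round(fracs[0] * n))             # end of Training
        cut2 = cut1 + int(round(fracs[1] * n))      # end of Validation
        for j, i in enumerate(idxs):
            if j < cut1:
                assignment[i] = "Training"
            elif j < cut2:
                assignment[i] = "Validation"
            else:
                assignment[i] = "Test"
    return assignment

# Imbalanced example: 80 rows of class 0, 20 rows of class 1.
labels = [0] * 80 + [1] * 20
col = stratified_split(labels)
```

With these numbers every subset ends up exactly 80% class 0 and 20% class 1, which is the "similar distribution structure" described above; a purely random split can easily leave the minority class under-represented in the validation or test rows.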
DS