Hi @Wang123,
Welcome to the Community!
Have you read the documentation about Model Screening? To answer your question directly, here is the answer from the JMP Help: "The modeling platforms use default options and tuning parameters in model fitting. You can try to improve the fit past what the default yields by calling platforms directly and choosing different options."
So by default, you can't tune hyperparameters in the Model Screening platform (except perhaps through JSL), and that is not what this platform is meant for. Model Screening is AutoML: the aim is to quickly fit a large number of very different types of Machine Learning models, to better understand which type(s) of model seem to fit the data correctly. Once this exploration phase is done, you can use the corresponding models' platforms to tune the hyperparameters with tuning design tables and possibly improve your model's performance further.
Note that Machine Learning algorithms vary in their sensitivity/robustness to hyperparameter configuration: for example, Random Forests are among the most robust ML algorithms, with hyperparameters that have little influence on performance. See [1802.09596] Tunability: Importance of Hyperparameters of Machine Learning Algorithms (arxiv.org)
You may technically be able to do what you intend to do, but there are several warnings to keep in mind:
- If you want to fit several models and tune them, you may have to use nested cross-validation (if you are already using cross-validation) or cross-validation on your training set (while using the validation set to compare the models' performances). The added data splitting is there to make sure that you are not overfitting through the hyperparameter optimization, and that part of your data can still be used to compare and select algorithms in a fair and "as unbiased as possible" way (a small sketch is given after the resource links below).
- A possible problem with the Model Screening platform is that the cross-validation option is random by default. Depending on your dataset and the distributions of your features/inputs, this may not be a sensible choice and may not provide a fair, representative assessment of your dataset. A workaround is to create a stratified formula (K-folds) validation column and use it in the Validation panel of Model Screening (see the sketch right after this list): Launch the Make Validation Column Platform (jmp.com)
- If you really want to try to tune several models with this platform or any AutoML platform, think about the number of models that need to be fitted and the time and computation required: you will need to fit each selected type of model at least K×K' times (nested cross-validation with K = number of outer folds for model validation, K' = number of inner folds for hyperparameter tuning), and more once you multiply by the number of hyperparameter candidates you evaluate. Depending on your dataset, its dimensionality and complexity, this "brute-force" approach may not be reasonable or achievable.
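To illustrate the stratification point, here is a minimal Python/scikit-learn sketch of the idea behind a stratified K-folds validation column (this is outside JMP and purely illustrative; in JMP you would use the Make Validation Column platform instead, and the toy data, column names and number of folds below are just assumptions for the example):

```python
# Minimal sketch: build a stratified K-folds assignment column,
# i.e. the concept behind JMP's stratified formula validation column.
import pandas as pd
from sklearn.model_selection import StratifiedKFold

# Toy dataset: 20 rows, binary target "Y" (made up for the example).
df = pd.DataFrame({
    "X1": range(20),
    "Y": ["A", "B"] * 10,
})

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
df["Fold"] = -1
for fold_id, (_, fold_idx) in enumerate(skf.split(df[["X1"]], df["Y"])):
    df.loc[df.index[fold_idx], "Fold"] = fold_id  # each row gets its fold label

# Each fold keeps the same class proportions as the whole dataset.
print(df.groupby("Fold")["Y"].value_counts())
```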
More resources about nested cross-validation:
https://inria.github.io/scikit-learn-mooc/python_scripts/cross_validation_nested.html
https://scikit-learn.org/stable/auto_examples/model_selection/plot_nested_cross_validation_iris.html
https://medium.com/@cd_24/a-guide-to-nested-cross-validation-with-code-step-by-step-6a8ad06d5af2
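And here is a minimal nested cross-validation sketch in Python/scikit-learn, along the lines of the resources above (the estimator, the hyperparameter grid and the fold counts are placeholders, not recommendations):

```python
# Minimal nested cross-validation sketch (scikit-learn):
# the inner loop tunes hyperparameters, the outer loop estimates
# the performance of the whole "tune + fit" procedure.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (GridSearchCV, StratifiedKFold,
                                     cross_val_score, train_test_split)

X, y = load_breast_cancer(return_X_y=True)

# Keep a test set completely untouched until a final model is chosen.
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)

inner_cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=1)  # K' inner folds (tuning)
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)  # K outer folds (assessment)

# Toy hyperparameter grid, just for illustration.
param_grid = {"max_features": ["sqrt", 0.5], "min_samples_leaf": [1, 5]}
tuned_model = GridSearchCV(RandomForestClassifier(random_state=1), param_grid, cv=inner_cv)

# Roughly K x K' x (number of candidates) fits, plus K refits: here 5 x 3 x 4 + 5.
scores = cross_val_score(tuned_model, X_train, y_train, cv=outer_cv)
print(f"Nested CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```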
My advice would be to proceed sequentially and use the platform for what it is best at: comparing very diverse algorithms and selecting the most promising ones. Then you can optimize the hyperparameters of the (few) models that seem to perform best, in order to improve performance a little further. Be careful in this process to avoid data leakage: for example, never use your test set until a final model has been chosen and optimized (otherwise you can expect optimistic results on your test set and bad surprises when the algorithm is used on new data).
This process makes more sense to me than a brute-force approach, as hyperparameter optimization will not "magically" make an algorithm learn much better than a more suitable algorithm/method would.
I hope this answer (even if not exactly what you expected) helps you,
Victor GUILLER
L'Oréal Data & Analytics
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)