@Lu : When I spoke of stratifying across the levels of the categorical response, I was not referring to the % splits that set the sizes of the training, validation, and test sets. What I'm recommending is this: select the response column as the stratification column, then under the 'Select Method' window, select "Stratified Validation Column". This forces each level of the categorical response to appear in as close to the same proportion as possible within the training, validation, and (optionally) test sets. Sometimes the gremlins of pure random selection will create a significant IMBALANCE in the levels of the response, which can make model validation more difficult. Here's some more documentation:
https://www.jmp.com/support/help/en/15.2/#page/jmp/launch-the-make-validation-column-platform.shtml
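JMP handles all of this for you in the Make Validation Column platform, but if it helps to see the idea outside of JMP, here's a minimal sketch in Python with scikit-learn. The file name, the "Response" column, and the 60/20/20 split are just assumptions for illustration:

```python
# Sketch of a stratified train/validation/test split (scikit-learn),
# illustrating what JMP's stratified validation column does.
# The file name, column name, and 60/20/20 sizes are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("my_data.csv")   # hypothetical data set
y = df["Response"]                # categorical response column

# First carve off a 20% test set, stratifying on the response...
train_val, test = train_test_split(df, test_size=0.20, stratify=y, random_state=1)

# ...then split the remainder into training and validation, again stratified.
train, val = train_test_split(
    train_val, test_size=0.25,    # 0.25 of the remaining 80% = 20% overall
    stratify=train_val["Response"], random_state=1
)

# Each split now carries (nearly) the same proportion of each response level.
for name, part in [("train", train), ("validation", val), ("test", test)]:
    print(name, part["Response"].value_counts(normalize=True).round(3).to_dict())
```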
As for partial least squares, I'm not sure it has any clear-cut advantages. But one thing PLS is VERY good at is creating latent variables from your multicollinear predictors, which can in turn be used for modeling. So it's just a different mathematical treatment of the predictors compared to Lasso, Elastic Net, or any of the tree-based methods. See the JMP documentation for this approach:
https://www.jmp.com/support/help/en/15.2/#page/jmp/partial-least-squares-models.shtml
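Again, JMP's PLS platform does this directly, but here's a quick sketch of the latent-variable idea in Python. The simulated data and the choice of two components are made up purely for illustration:

```python
# Sketch of PLS extracting latent variables from collinear predictors
# (scikit-learn). The simulated data and n_components=2 are assumptions;
# JMP's Partial Least Squares platform does the equivalent.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n = 200
t = rng.normal(size=(n, 2))                       # two underlying latent factors
X = t @ rng.normal(size=(2, 8)) + 0.05 * rng.normal(size=(n, 8))  # 8 collinear predictors
y = t[:, 0] - 0.5 * t[:, 1] + 0.1 * rng.normal(size=n)

pls = PLSRegression(n_components=2)               # ask for two latent variables
pls.fit(X, y)

# The projected scores ARE the latent variables; they can feed any downstream model.
scores = pls.transform(X)
print("latent variable shape:", scores.shape)     # (200, 2)
print("R^2 on training data:", round(pls.score(X, y), 3))
```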
My general recommendation is to try many different modeling methods, both linear and nonlinear, including those mentioned by @Mark_Bailey , and then export each model's results to the Formula Depot or Model Comparison platform and see which model works best.
There isn't any one method that fits all problems. So try many and hope you can find something that works.
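In JMP that side-by-side comparison is what the Model Comparison platform gives you. Purely as an illustration of the workflow, here's the same idea sketched in Python; the model roster, simulated data, and accuracy metric are arbitrary choices:

```python
# Sketch of the "try many methods, compare on validation" workflow.
# The models, simulated data, and metric are arbitrary; in JMP the
# Model Comparison platform produces this kind of summary for you.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=1)

models = {
    "logistic (linear)": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=1),
    "gradient boosting": GradientBoostingClassifier(random_state=1),
}

# Fit each candidate and score it on the same held-out validation set.
for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_val, model.predict(X_val))
    print(f"{name}: validation accuracy = {acc:.3f}")
```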