I'm using PLS, along with several other techniques, to predict {0, 1} outcomes. Logistic regression saves the prediction formulas as expected, and so do LASSO and other regularization techniques; those columns are always in the range (0, 1). With PLS I can save the linear predictor column (Pred Formula), but because JMP treats the {0, 1} data as continuous, the linear predictors (call these the LP's) can lie outside the interval (0, 1): some are negative and some are > 1. My first inclination was to do exactly what I do for the other models and put the linear predictor through the logit function to arrive at a probability score. But if we do this for PLS, the probability scores are always squeezed into a small region around 0.5. Suppose the dynamic range of the LP's is [LP_Min, LP_Max]. Then the probabilities will lie in the range

[ 1/(1 + exp(-LP_Min)), 1/(1 + exp(-LP_Max)) ].
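To make the squeeze concrete, here is a minimal sketch (the LP values are hypothetical, chosen only to illustrate the kind of range PLS produces when the binary response is fit as continuous):

```python
import math

def logistic(x):
    """Inverse logit: maps a linear predictor to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical PLS linear predictors for a {0, 1} response; because the
# response is fit as continuous, values fall outside (0, 1).
lp = [-0.3, 0.1, 0.4, 0.8, 1.2]

probs = [logistic(x) for x in lp]

# The whole dynamic range collapses into a narrow band around 0.5:
print(min(probs), max(probs))  # roughly 0.43 to 0.77
```

Because the LP's span only about 1.5 units on the linear scale, the logit transform maps them into a band of width well under 0.4 centered near 0.5, exactly the shrunken dynamic range described above.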

When comparing the prediction results from, say, LASSO and PLS our customer immediately draws the conclusion that LASSO is better because it appears to better distinguish 0's and 1's simply because the probability scores have a larger dynamic range.

So, finally, the question: should we stick with the LP's and perhaps censor the negative and > 1 values to 0 and 1 respectively? Or should we scale the LP's so that they all have the same dynamic range? I appreciate that what really matters is the rank statistics, but it is worth knowing that when you compare PLS with other logistic methods, one (me, in particular) might at first be confused about why PLS has such a shrunken dynamic range.
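The two options can be sketched side by side (again with hypothetical LP values). One difference worth noting: censoring collapses every out-of-range value to an endpoint, creating ties and discarding rank information among those observations, while a min-max rescale is strictly increasing and so preserves the rank statistics (and hence ROC/AUC) exactly:

```python
# Hypothetical PLS linear predictors, some outside [0, 1].
lp = [-0.3, 0.1, 0.4, 0.8, 1.2]

# Option 1: censor (clip) to [0, 1]. Values outside the interval
# collapse to the endpoints, so ranks are lost among clipped points.
censored = [min(max(x, 0.0), 1.0) for x in lp]

# Option 2: min-max rescale to [0, 1]. A strictly increasing
# transform, so the ordering of the scores is unchanged.
lo, hi = min(lp), max(lp)
rescaled = [(x - lo) / (hi - lo) for x in lp]

print(censored)  # [0.0, 0.1, 0.4, 0.8, 1.0]
print(rescaled)  # [0.0, 0.266..., 0.466..., 0.733..., 1.0]
```

Neither option turns the LP's into calibrated probabilities; they only put the scores on a (0, 1) scale comparable to the other methods' outputs.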