Hi @34South,
Do you have JMP Pro or access to JMP Pro? I ask because the Partition platform isn't the best (most stable) approach for generating prediction formulas. The Boosted Tree and Bootstrap Forest platforms are much better. Or, if you have access to a newer version, you can also run the XGBoost platform, which is very stable.
But, on to your question about minimum size split. The minimum size split controls the complexity of the tree "stump": if the number of observations in a candidate split is less than the value you set, that split isn't generated. It isn't too relevant when you use boosting with your decision tree, but it can be used to control complexity. You don't want it too small (e.g., 1 observation), and you also don't want it too big (e.g., 1,000 observations). A typical starting point is around 5 for a simple tree method. If you're using something like the Bootstrap Forest platform, you'll want to use N/2 for classification problems like yours, where N is the number of observations. You may be able to tune your model by adjusting this parameter and then comparing the different models on a test data set.
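If it helps to see the effect outside of JMP, here's a minimal sketch in Python with scikit-learn, where `min_samples_split` plays the same role as the Partition platform's minimum size split (the data set and parameter values are made up for illustration):

```python
# Sketch: how a minimum split size constrains tree complexity.
# min_samples_split is scikit-learn's analogue of JMP's minimum size split:
# a node with fewer observations than this value is never split.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary-classification data (for illustration only)
X, y = make_classification(n_samples=500, n_features=10, random_state=1)

leaves = {}
for min_split in (2, 5, 50):
    tree = DecisionTreeClassifier(min_samples_split=min_split, random_state=1)
    tree.fit(X, y)
    leaves[min_split] = tree.get_n_leaves()
    print(f"min split size {min_split:>3}: {leaves[min_split]} leaves")
```

Raising the minimum split size gives a shallower tree with fewer leaves, which is exactly the complexity control described above.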
If you have enough data, I'd recommend splitting off a portion, maybe 20-25%, and keeping it as a test data set that isn't used to train or validate your model. With the remaining data, you can generate a stratified validation column with training and validation rows (say 80/20) to optimize the complexity of your model while keeping it from overfitting. After generating several models, you can use the held-out test data to compare which model actually predicts the outcomes best. Since you're looking for a binary decision model, you can optimize it further by looking at cost-sensitive learning.
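The holdout scheme above can be sketched in a few lines; this uses scikit-learn rather than a JMP validation column, with made-up sizes (1,000 rows, 20% test, then 80/20 train/validation), but the stratified two-stage split is the same idea:

```python
# Sketch of the holdout scheme: carve off 20% as an untouched test set,
# then split the rest 80/20 into training and validation, stratifying
# on the binary outcome at each stage.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Imbalanced binary outcome (for illustration only)
X, y = make_classification(n_samples=1000, weights=[0.7], random_state=0)

# 20% test set, never used for training or tuning
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=0)

# Remaining 80% split again into training/validation (80/20)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.20, stratify=y_rest, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # -> 640 160 200
```

Stratifying both splits keeps the class proportions roughly equal across training, validation, and test sets, which matters most when the binary outcome is imbalanced.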
I'm not an expert in how JMP calculates the LogWorth or Fisher's exact test, so hopefully someone from JMP can answer that, but I'm guessing the difference comes from the LogWorth being calculated on the whole data set versus saving the split formula and running a contingency table.
Hope this helps -- or at least gives some things to think about.
Happy modeling!
DS