Here's my limited experience.
Modeling is quite complex (choosing the right model, data preparation, data splitting, hyperparameter tuning, ...).
A good, robust model is largely independent of the split, the hyperparameters, the model type, ...
Both the training and the test set need to be representative of the process, but the test set doesn't need to be as comprehensive. A training/test split is necessary to judge the result, as @ih already mentioned.
For splitting, the stratify option can be used to control the split. But for both modeling and splitting, you need process know-how to do it best.
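In JMP this is done through the validation-column dialog; just to illustrate what a stratified split does, here is a plain-Python sketch (the function name and the `label_of` accessor are my own, not from JMP or any library): each class is split separately so train and test keep roughly the same class proportions.

```python
import random
from collections import defaultdict

def stratified_split(rows, label_of, test_frac=0.25, seed=42):
    """Split rows so each class keeps roughly the same proportion
    in train and test (illustrative sketch only)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for r in rows:
        by_class[label_of(r)].append(r)
    train, test = [], []
    for members in by_class.values():
        rng.shuffle(members)                      # randomize within each class
        n_test = round(len(members) * test_frac)  # per-class test share
        test.extend(members[:n_test])
        train.extend(members[n_test:])
    return train, test
```

With a rare class (say 25 % positives), a plain random split can leave the test set with too few positives; the per-class split above avoids that.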
For complex models, and when the result is critical, I try to test different models, different hyperparameters, and different validation columns. When there is no major difference, there is no issue. And this is quite easy to do in JMP (Pro).
In the end, only this test tells you whether there is leakage. But honestly, I wouldn't mind leakage if I could be sure of having a good model.
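The stability check described above can be sketched outside JMP as well (function and helper names here are my own illustration, not any library's API): re-fit the same model on several random splits and look at the spread of the test metric. A large spread means the result depends on the split, which is exactly the warning sign.

```python
import random
import statistics

def split_stability(xs, ys, fit, metric, n_splits=5, train_frac=0.75, seed=0):
    """Re-fit on several random train/test splits and return the
    mean and spread of the test metric (illustrative sketch)."""
    rng = random.Random(seed)
    idx = list(range(len(xs)))
    scores = []
    for _ in range(n_splits):
        rng.shuffle(idx)                      # a fresh random split each round
        cut = int(train_frac * len(idx))
        tr, te = idx[:cut], idx[cut:]
        model = fit([xs[i] for i in tr], [ys[i] for i in tr])
        scores.append(metric(model, [xs[i] for i in te], [ys[i] for i in te]))
    return statistics.mean(scores), statistics.pstdev(scores)

# Trivial placeholder "model" for the demo: predict the training mean.
fit_mean = lambda X, y: statistics.mean(y)
mse = lambda m, X, y: statistics.mean((yi - m) ** 2 for yi in y)
```

If the spread stays small across splits (and across model types and hyperparameters, as in JMP), the result is robust; if not, dig into the splitting.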
Georg