One question for you first... hopefully you randomly selected the observations to include in your 'training' set? If not, I'd backtrack and start there. Assuming you did...
Well, yes, if you want to evaluate models using a training/validation/test construct, which is generally best practice for predictive modeling work, then JMP Pro is the way to go. It has lots of very useful capabilities, like the Model Comparison and Formula Depot platforms, aimed specifically at efficient evaluation of multiple models.
But, lacking JMP Pro, here are two simple ideas for you to consider. First, save the model you've built with your training data as a formula to a new column, named 'Predicted Values' or some such, in a data table containing your 'test' data set (from the Fit Model report's red triangle menu, Save Columns > Prediction Formula does this). You should now have predictions of the response in that column. Now create a Fit Y by X plot of the actual response values against the 'Predicted Values' and examine the 'goodness of fit' between the two sets of responses. I'd force equal x and y axis limits on the chart to all but replicate the actual vs. predicted plot in the Fit Model report. Use your eye to find that 45 degree line... or just overlay one in Graph Builder. A minimal JSL sketch of this idea follows.
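Here's that first idea in JSL. It assumes your test table is the current data table and that the response column is named 'Actual' and the saved prediction column 'Predicted Values'; those names are placeholders, so swap in your own:

// Assumes the test data table is the active table, with a response
// column "Actual" and a saved prediction formula column
// "Predicted Values" (both names are placeholders)
dt = Current Data Table();

// Fit Y by X (Bivariate) of actual response vs. model predictions;
// set equal axis ranges by hand, or overlay a 45 degree line in
// Graph Builder if you prefer
Bivariate(
	Y( :Actual ),
	X( :Predicted Values )
);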
Another variation would be to subtract the Predicted Values from the Actual values and plot those residuals as a histogram; a sketch of that is below.
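Again a rough sketch, with the same assumed column names as above:

// Compute residuals in a new formula column, then plot them
dt = Current Data Table();
dt << New Column( "Residual", Numeric, "Continuous",
	Formula( :Actual - :Predicted Values )
);

// Histogram (and summary statistics) of the residuals
Distribution( Continuous Distribution( Column( :Residual ) ) );

Roughly symmetric residuals centered on zero are what you'd hope to see; a shifted or skewed histogram points to systematic bias in the predictions.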
These techniques won't do anything to evaluate overfitting... but if your emphasis is strictly prediction, well, it's a start.