There are many ways and criteria for choosing checkpoints. The right choice depends on where you are in the data collection cycle, whether your data come from a design of experiments (DOE), haphazardly collected historical data, organized historical data, or sensor streams, and what you want to learn from the checkpoints.
Almost everyone wants to verify that they have found optimal operating conditions for a process. But you can't know that BEFORE you start taking data or running your DOE. So you might include your best guesses for high performance or low cost, your boss's guesses, or those of some other subject matter expert. You might also choose the checkpoints algorithmically, as in augmenting a DOE to support the next higher-order model: fit the model to all of the non-checkpoint runs, and if it shows lack of fit, you already have the data needed for the next higher-order model, which may eliminate the lack of fit.
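As a minimal sketch of that last idea, with entirely made-up numbers: fit a first-order model to the non-checkpoint runs, then look at the residuals at the checkpoint runs. Here the checkpoints sit at the center of the design, exactly where curvature (which a first-order model cannot capture) would show up.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = b0 + b1*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx
    return b0, b1

# Non-checkpoint runs at the design's low/high settings (coded -1, +1).
# All response values are invented for illustration.
design_x = [-1, -1, 1, 1]
design_y = [10.1, 9.9, 14.0, 14.2]

# Checkpoint runs at the center point (coded 0).
check_x = [0, 0]
check_y = [13.5, 13.7]

b0, b1 = fit_line(design_x, design_y)
residuals = [y - (b0 + b1 * x) for x, y in zip(check_x, check_y)]
print(residuals)  # residuals much larger than replicate noise suggest adding a quadratic term
```

The checkpoint residuals here are around 1.5 units while the replicates agree to within about 0.2, a strong hint that a second-order term is needed, and the center-point runs are already in hand to fit it.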
AFTER you have collected and analyzed your data (built a model), you can choose checkpoints at the predicted optimal performance, or in regions of extrapolation, to see where the model predictions break down. If you have a manufacturing process, you will probably also set up some kind of control chart to test certain conditions over time and see whether the process is stable or drifts.
If you have lots of data (more the data mining scenario than the DOE one), you can ask JMP to break the data randomly into training, validation (tuning), and test subsets. Then you fit the model to the training set, tune the model parameters with the validation set to prevent over-fitting, and check the prediction accuracy with the test set. The test set is effectively your set of checkpoints in this case.
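Outside JMP, the same idea can be sketched in a few lines, here with an assumed 60/20/20 split and a fixed seed so the partition is reproducible:

```python
import random

def split_rows(rows, seed=0, train=0.6, valid=0.2):
    """Randomly partition rows into training, validation, and test subsets."""
    shuffled = rows[:]
    random.Random(seed).shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_valid = int(len(shuffled) * valid)
    return (shuffled[:n_train],                   # fit the model here
            shuffled[n_train:n_train + n_valid],  # tune parameters here
            shuffled[n_train + n_valid:])         # checkpoints: final accuracy check

rows = list(range(100))
train_set, valid_set, test_set = split_rows(rows)
print(len(train_set), len(valid_set), len(test_set))  # 60 20 20
```

The key discipline is that the test rows are touched only once, at the end; peeking at them during tuning turns your checkpoints back into training data.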
I hope that gives you a range of ideas on how to choose checkpoints.