Hi @frankderuyck,
I agree, the documentation about the use of a validation set in the Functional Data Explorer is a bit confusing regarding the terminology and the roles of the sets: Launch the Functional Data Explorer Platform (jmp.com)
There are only two possible sets in the Functional Data Explorer (unlike other machine learning algorithms, which may use three sets: training, validation, and test): the Training set (coded as 0, or the smallest value) and the Validation set (coded as 1, or higher values).
The validation set is in fact used here as a test set (to follow the naming convention rigorously): it is not used to fit the functional model, but you can extract the FPCA scores of this set to see how well the fitted model performs on new functional data: Solved: Functional Data Analysis and Classification: How to calculate FPC of new data (u... - JMP Us...
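To make the idea concrete, here is a minimal sketch (outside JMP, in Python) of how FPC scores for new curves are obtained in principle: each new curve is centered with the training mean function and projected onto the fitted eigenfunctions via a numerical integral. The function name, argument names, and the common-grid assumption are all illustrative, not JMP's actual implementation.

```python
import numpy as np

def fpc_scores(new_curves, mean_fn, eigen_fns, grid):
    """Project new curves onto fitted FPC eigenfunctions.

    new_curves : (n_curves, n_points) observations on `grid`
    mean_fn    : (n_points,) mean function fitted on the training set
    eigen_fns  : (n_components, n_points) fitted eigenfunctions
    grid       : (n_points,) common evaluation grid
    Returns an (n_curves, n_components) array of FPC scores.
    """
    centered = new_curves - mean_fn  # subtract the training mean
    # Approximate the inner-product integral with the trapezoidal rule
    return np.array([
        [np.trapz(c * phi, grid) for phi in eigen_fns]
        for c in centered
    ])
```

The key point mirrors what the platform does: the mean function and eigenfunctions come from the training fit only; the validation (test) curves are merely projected onto them.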
The Training set is used to fit the model and evaluate its fit, and all plots in the report you mention are based on the training set.
You can evaluate the model's fit on the validation data by extracting the prediction formula into a new data table (red triangle next to "[Model] on [Initial data]", then "Save Data"). Formula columns (Prediction and Residuals) are added, and using Graph Builder you can visualize Actual vs. Predicted for your validation data, or plot the residuals to reproduce the results shown in the report:
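If you prefer a numeric summary alongside the Graph Builder plots, the saved table can also be analyzed directly. A small sketch, assuming hypothetical column names ("Validation", "Actual", "Prediction") standing in for whatever JMP's "Save Data" actually writes:

```python
import pandas as pd

# Toy stand-in for the table saved from the platform.
# 0 = training row, 1 = validation row (values are illustrative only).
df = pd.DataFrame({
    "Validation": [0, 0, 1, 1],
    "Actual":     [1.0, 2.0, 3.0, 4.0],
    "Prediction": [1.1, 1.9, 3.3, 3.8],
})

# Residual = actual minus predicted, as in the saved Residuals column
df["Residual"] = df["Actual"] - df["Prediction"]

# RMSE per set: a one-number summary of each Actual-vs-Predicted plot
rmse = (df.groupby("Validation")["Residual"]
          .apply(lambda r: (r ** 2).mean() ** 0.5))
print(rmse)
```

Comparing the two RMSE values gives a quick check for overfitting: a validation RMSE much larger than the training RMSE suggests the functional model does not generalize well.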
But I agree with you: having the results on the validation data directly in the platform would be a lot easier. This might be a good idea to add to the JMP Wish List if it is not already there.
I hope this answer helps,
Victor GUILLER
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)