Hi @Here4DOE,
Welcome to the Community!
To receive more feedback and responses to your question, I would recommend reading Getting correct answers to correct questions quickly.
Could you provide some context and more info? Perhaps a screenshot of the situation, details about how you use the platform and how you set up the validation column, or, even better, a sample dataset we could use to reproduce the situation?
I tried to reproduce your situation using the JMP dataset "Fermentation Process" and ran the analysis twice: first with all rows included (training + validation), then with the same training rows but some validation rows excluded. For the validation rows common to both runs, the FPCs are identical:

This is the expected behavior: validation rows in the Functional Data Explorer act like a test set. FPCs and other quantities are calculated for these rows based on the model fitted on the training set, but the model fitting itself is not influenced by the validation rows. So you should expect the same values for a given validation row, no matter how many other validation rows are included. See Validation in JMP Modeling.
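To illustrate why the scores for shared validation rows should be identical, here is a minimal sketch using ordinary PCA as a stand-in for FPCA (this is a simplified analogy with simulated data, not JMP's actual FDE implementation or the Fermentation Process dataset): the components are estimated from the training rows only, so projecting a validation row is a fixed linear map that does not depend on which other validation rows are present.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "curves": 20 training rows and 10 candidate validation
# rows, each sampled at 15 time points (hypothetical data).
train = rng.normal(size=(20, 15))
valid = rng.normal(size=(10, 15))

# Fit the principal components on the TRAINING rows only.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:3]  # keep 3 components

def scores(rows):
    """Project rows onto the training-based components."""
    return (rows - mean) @ components.T

# Run 1: score all 10 validation rows.
# Run 2: score only the first 4 validation rows.
scores_all = scores(valid)
scores_subset = scores(valid[:4])

# The scores for the shared rows are identical: validation rows do
# not influence the fit, so adding or excluding other validation
# rows changes nothing for the rows they have in common.
assert np.allclose(scores_all[:4], scores_subset)
```

If the scores for shared validation rows differ between two runs, something other than the validation rows themselves must have changed between the fits.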
I think something else might be causing these results: perhaps a difference in the data splitting (are the training rows the same, or are some excluded?) or in the data processing/modeling (any pre-processing, scaling, or alignment? Same model and parameters in both situations? Same number of Shape Functions?).
Hope this helps in the meantime,
Victor GUILLER
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)