Hi @Aziza,
@P_Bartell suggests running the two designs (the original design and the design with the extra points) in parallel, to verify that the parameter estimates found with the original design are similar enough to the ones from the design with the extra points, and/or that you are not missing any other terms/effects, and/or that you don't have a lack of fit (the new points may bring extra degrees of freedom that help detect it).
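As a rough illustration of that comparison, here is a minimal Python sketch (the file names, the response y, the factors x1/x2, and the model formula are all placeholders to adapt to your own DoE):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data files: runs from the original design and the added runs
df_original = pd.read_csv("original_design.csv")
df_extra = pd.read_csv("extra_points.csv")
df_augmented = pd.concat([df_original, df_extra], ignore_index=True)

# Same model fitted on both data sets (adapt the terms to your design)
formula = "y ~ x1 + x2 + x1:x2"
fit_orig = smf.ols(formula, data=df_original).fit()
fit_aug = smf.ols(formula, data=df_augmented).fit()

# Parameter estimates side by side, plus how much they shift
comparison = pd.DataFrame({
    "original": fit_orig.params,
    "augmented": fit_aug.params,
    "shift": fit_aug.params - fit_orig.params,
})
print(comparison)
```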
Note that depending on the new values added (and the noise), the terms in the model may change (a small sketch of how to check this follows the list below). You may need to:
- add new terms to your second model (if there were not enough points in the original DoE to estimate/detect them previously, or they were not significant enough),
- remove existing terms from your second model (because the new points show an inverse/opposite behaviour compared to the previous experiments in the DoE, so those effects are no longer significant),
- or simply keep the same terms as before, possibly with an increase in parameter estimation accuracy (depending on the experimental variance).
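Continuing the hypothetical sketch above, one way to check whether a candidate term (here an assumed quadratic in x1) becomes worth keeping once the new runs are included is a nested-model F-test:

```python
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Base model vs. a richer candidate model, both on the augmented data
fit_base = smf.ols("y ~ x1 + x2 + x1:x2", data=df_augmented).fit()
fit_candidate = smf.ols("y ~ x1 + x2 + x1:x2 + I(x1**2)", data=df_augmented).fit()

# Term-by-term p-values in the richer model
print(fit_candidate.pvalues)

# F-test comparing the nested models: a small p-value suggests the
# extra term is supported now that the new runs are included
print(anova_lm(fit_base, fit_candidate))
```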
One "other" option (to do perhaps before the models comparison) could also be to use the newly added points (not from the original design) as validation points (in the tradition of validation set in Machine Learning). This way, you check that the model created based on the DoE is accurate enough and may generalize to new experiments done by visualizing residuals and comparing your performance metric(s) (R², R² adjusted, RMSE, ... depending on your objective) between points in the model (training set) and newly added points (validation set).
Depending on the differences you find between the training and validation results, you may have to explore the models more closely and move on to the model comparison approach described above, to discover and understand the differences and the consequences of adding the new points.
Hope this helps you
Victor GUILLER
L'Oréal Data & Analytics
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)