It might help to first clarify what everyone means by "validate".
Generally, when modeling ad-hoc data, validating the model with a holdout data set is a very good idea. The fit statistics computed on the holdout data are used to assess the quality of the model built from the training data. With ad-hoc data we are trying to estimate properties of the underlying population, and working with small data sets undermines our ability to both train and evaluate the model, so maximizing the number of observations about the system we are modeling is important for better estimates.
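To make that concrete, here is a minimal holdout-validation sketch in Python with scikit-learn (an illustration only, not a JMP workflow; the simulated data, the 25% holdout fraction, and the linear model are all assumptions):

```python
# Hypothetical illustration of holdout validation for ad-hoc data.
# The simulated columns, the 25% holdout fraction, and the linear
# model are assumptions for the sketch, not a recommendation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                      # ad-hoc/happenstance predictors
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200)

# Hold out 25% of the rows; they play no part in fitting.
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.25, random_state=1
)

model = LinearRegression().fit(X_train, y_train)

# Fit statistics on the holdout set assess the model built on the training set.
print("training R^2:", r2_score(y_train, model.predict(X_train)))
print("holdout  R^2:", r2_score(y_hold, model.predict(X_hold)))
```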
The data for a DOE is very carefully planned and collected before the analysis. The design supports the use of model terms that are specified a priori, in contrast to modeling happenstance data, where we may not know which terms can be included in the model until they are evaluated.
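Here is a rough sketch of what "planned before the analysis" can look like: the runs (a 2^3 full factorial) and the model terms (main effects plus one interaction) are fixed before any response is measured. The factor names, coefficients, and simulated responses are made up for illustration:

```python
# Hypothetical 2^3 full-factorial design with model terms chosen a priori.
# Factor names, coefficients, and simulated responses are illustrative only.
from itertools import product
import numpy as np

# Every run is planned before the experiment: coded levels -1 / +1.
levels = [-1, 1]
design = np.array(list(product(levels, repeat=3)))   # 8 runs, factors A, B, C

# A priori model: intercept + A + B + C + A*B, decided before data collection.
X = np.column_stack([
    np.ones(len(design)),        # intercept
    design[:, 0],                # A
    design[:, 1],                # B
    design[:, 2],                # C
    design[:, 0] * design[:, 1], # A*B interaction
])

# After the runs are performed, the measured responses y are plugged in and
# only the pre-specified coefficients are estimated (y is simulated here).
y = X @ np.array([10.0, 2.0, -1.5, 0.5, 1.0]) \
    + np.random.default_rng(2).normal(scale=0.3, size=len(X))
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["Intercept", "A", "B", "C", "A*B"], beta.round(2))))
```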
For a DOE, none of the data is held out for model validation because, by design, all of the data is needed to estimate the model coefficients. A key goal of a DOE is to generate the least amount of data necessary to model the physical phenomenon. In the DOE context, "validation" implies that additional runs will be completed after the experiment is finished. Those validation runs are compared to the model's predicted values to assess the quality of the DOE model's predictions. Often the number of validation runs is small, perhaps only 1-3. These runs provide evidence to support changes to a physical system that will then generate many runs/observations at the same input settings; the model will be used to tune the physical system/process going forward.
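And a rough sketch of checking a handful of confirmation ("validation") runs against the fitted model's predictions; the settings, observed values, and the RMSE-based yardstick are all invented for illustration:

```python
# Hypothetical check of a few DOE confirmation ("validation") runs against
# the fitted model's predictions. Settings and observed values are made up.
import numpy as np

# Coefficients from a previously fit DOE model: intercept, A, B, C, A*B.
beta = np.array([10.0, 2.0, -1.5, 0.5, 1.0])

def predict(a, b, c):
    """Predicted response at coded settings using the a priori model terms."""
    x = np.array([1.0, a, b, c, a * b])
    return float(x @ beta)

# A small number of confirmation runs at the recommended settings.
confirmation_settings = [(1, -1, 1), (1, -1, 1), (0.5, -1, 1)]
observed = [13.7, 13.2, 12.4]      # measured after the experiment is complete
model_rmse = 0.3                   # residual RMSE from the DOE fit (assumed)

for setting, obs in zip(confirmation_settings, observed):
    pred = predict(*setting)
    # Flag runs that deviate from prediction by more than ~3x the model RMSE.
    flag = "OK" if abs(obs - pred) <= 3 * model_rmse else "investigate"
    print(f"settings={setting} predicted={pred:.2f} observed={obs:.2f} -> {flag}")
```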
DOE World uses many of the same methods as Modeling World, but the approach is very different.
JMP Systems Engineer, Health and Life Sciences (Pharma)