Frank, I agree, a sequential approach is always better.
IMHO, there will always be those who look to optimize the design structure for a given situation (e.g., optimality criteria), BUT the most important aspect of experimentation is how you handle the noise (e.g., short-term noise like measurement error, within-batch, and within-part variation, and long-term noise like ambient conditions, lot-to-lot raw materials, and human technique). If the noise is held constant during the experiment, then you have a narrow inference space and the results will likely not extrapolate. If the noise "randomly" varies during the experiment, then you compromise precision. The strategies for handling noise are not well described by the software (any software). There is always too much focus on the design structure (and on how you can economize runs to learn more).
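That trade-off is easy to see in a quick simulation. Below is a minimal sketch (not anyone's published method) of a single two-level factor with a normally distributed "lot" effect standing in for long-term noise; the function name and noise magnitudes are hypothetical, chosen only to illustrate the held-constant vs. randomly-varying contrast:

```python
import numpy as np

rng = np.random.default_rng(0)

def effect_spread(n_runs, true_effect, noise_mode, n_reps=2000):
    """Simulate a two-level factor experiment under two noise strategies.

    'held'   : long-term noise fixed for the whole experiment
               (narrow inference space, but it cancels out of the estimate)
    'random' : long-term noise varies run to run
               (wider inference space, but the estimate is less precise)
    Returns the standard deviation of the estimated effect across replicates.
    """
    effects = []
    for _ in range(n_reps):
        # Randomized run order for the two-level factor
        x = rng.permutation([-1, 1] * (n_runs // 2))
        if noise_mode == "held":
            lot = rng.normal(0, 1)            # one raw-material lot for all runs
            long_term = np.full(n_runs, lot)
        else:
            long_term = rng.normal(0, 1, n_runs)  # lot changes run to run
        short_term = rng.normal(0, 0.5, n_runs)   # e.g., measurement error
        y = true_effect * x + long_term + short_term
        effects.append(y[x == 1].mean() - y[x == -1].mean())
    return np.std(effects)

# Holding the noise constant gives a tighter effect estimate, but it
# only speaks for that one lot; letting the noise vary widens the
# inference space at the cost of precision.
print("held:  ", effect_spread(8, 2.0, "held"))
print("random:", effect_spread(8, 2.0, "random"))
```

The held-constant case looks more precise only because the lot effect cancels out of the comparison — which is exactly the narrow-inference problem: the estimate applies to that lot, those conditions, that day.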
“Unfortunately, future experiments (future trials, tomorrow’s production) will be affected by environmental conditions (temperature, materials, people) different from those that affect this experiment…It is only by knowledge of the subject matter, possibly aided by further experiments (italics added) to cover a wider range of conditions, that one may decide, with a risk of being wrong, whether the environmental conditions of the future will be near enough the same as those of today to permit use of results in hand.”
Dr. Deming
"Before you make general rule of this case, test it two or three times and observe whether the tests produce the same effects"
Leonardo da Vinci
"BLOCK WHAT YOU CAN, RANDOMIZE WHAT YOU CANNOT"
G.E.P. Box
"All models are wrong, some are useful" G.E.P. Box