Here are my thoughts:
1. The selection of which experiments to run is, IMHO, a personal choice. You must understand the situation (context) and the questions you are trying to answer, and weigh the information each experimental strategy could provide against the resources it requires. In some cases, I would be willing to sacrifice efficiency for effectiveness.
2. Convincing engineers, scientists, and managers to run ANY experiment can be the biggest battle. It is even more challenging if they think of the design matrix as a "black box" and don't understand how it works. It is incredible how many folks are stuck in the one-factor-at-a-time (OFAT) mentality; it is, after all, so fundamentally intuitive. Classical orthogonal designs are easy to explain and provide a comfortable path to de-aliasing if necessary, particularly for those with no statistical background.
3. The focus of design selection is biased toward the factor effects; there is not enough emphasis on strategies to handle noise. Short-term noise (e.g., measurement error, within-sample, between-sample) is handled via repetition or nesting. Long-term noise (e.g., ambient conditions, lot-to-lot raw materials, operator technique) is handled via RCBD, BIB, and split-plot designs. I use cross-product arrays in robust design situations, and I have yet to find an easy way to set these up using the custom design platform (see the sketch after this list for one way to build such an array by hand).
4. "The purpose of the first experiment is to design a better experiment". We experiment to gain insight into and validate/invalidate our understanding of causal structure. The likelihood we have all of the important factors, tested at optimal levels in our first experiment is questionable. Iteration is how we learn. The first experiment is merely the start of the journey.
"All models are wrong, some are useful" G.E.P. Box