Dale, here are my thoughts. I apologize in advance: they are mostly philosophical. All scientific investigations are iterative (I guess that's why we call it continuous improvement). Why are we constantly "optimizing" and never seem to "get there"? IMHO, there are two primary reasons:
1. We never start with ALL the factors. We always start with some subset, likely the ones we already have hypotheses about. As for the factors we don't feel strongly about, we seldom collect data to confirm they have no effect. It is a typical human bias.
2. The world evolves. New materials and technologies are constantly being developed, along with new applications for them.
In search of the best manageable (e.g., simple) process, we iterate constantly between induction and deduction. Along this path, we make decisions weighing the effectiveness and the efficiency of the model(s). Ultimately, we are trying to understand causality. We use the methods at our disposal to determine which factors should be included in our future studies and what order of model is necessary. This includes practical evaluation as well as statistical evaluation. I am extremely cautious about using statistical significance because it is a comparison, and if you don't understand what you are comparing, it is meaningless. On the other hand, if you do have some idea of what is being compared (e.g., what creates the design space and what creates the inference space) and you know those sources of variation are representative of future conditions, then by all means feel confident in that analysis.

Finally, we recognize we live in a multivariate world (e.g., customers want the product to meet multiple criteria), so there will likely be compromises that must be made to achieve the best result under the current circumstances.
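To make that last point about compromises a bit more concrete, here is a minimal desirability-style sketch of trading off two responses. Everything in it (the response models, the targets, the bounds) is made up purely for illustration; it is one way to formalize the compromise, not a prescription.

```python
import numpy as np

# Hypothetical example: two competing responses over one coded factor.
# All models, targets, and bounds below are invented for illustration.

def desirability_larger_is_better(y, low, high):
    """Map a 'larger is better' response onto [0, 1]."""
    return np.clip((y - low) / (high - low), 0.0, 1.0)

def desirability_smaller_is_better(y, low, high):
    """Map a 'smaller is better' response onto [0, 1]."""
    return np.clip((high - y) / (high - low), 0.0, 1.0)

x = np.linspace(-1, 1, 201)                  # coded factor levels
yield_pred = 80 + 8 * x - 5 * x**2           # response 1: maximize (made up)
impurity_pred = 1.2 + 0.9 * x + 0.6 * x**2   # response 2: minimize (made up)

d1 = desirability_larger_is_better(yield_pred, low=75, high=90)
d2 = desirability_smaller_is_better(impurity_pred, low=0.5, high=2.0)

# Overall desirability is the geometric mean: a poor result on either
# response drags the compromise down, which is the point of the exercise.
overall = np.sqrt(d1 * d2)

best = np.argmax(overall)
print(f"best compromise at x = {x[best]:+.2f}, "
      f"yield ~ {yield_pred[best]:.1f}, impurity ~ {impurity_pred[best]:.2f}, "
      f"overall desirability = {overall[best]:.2f}")
```

Of course, the "best" level found this way is only as good as the models, targets, and weights behind it, which circles back to the design-space/inference-space caution above.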
"All models are wrong, some are useful" G.E.P. Box