I am just going to make some philosophical comments to add to the discussion or you can completely ignore me.
The idea behind response surface design is to map the response as a function of important factors (model terms). Three things the experimenter needs to keep in mind:
1. The "true" surface already exists and we can't change it.
2. Surfaces are "n" dimensional, although there is strong empirical evidence that only a small subset of the factors is needed to describe the surface usefully (the sparsity-of-effects principle).
3. It may not be useful or practical to map a surface using design factors when the conditions under which the experiment was run (i.e., inference space as a function of noise) can't be repeated.
There are many different approaches to mapping the surface, but in all cases iteration is key to an efficient and effective investigation. I am an advocate of sequential experimentation. For example (overly simplified): one may start with directed sampling to determine dominant components or sources of variation and to assess measurement error. Then experiment on the dominant components to identify significant factors and the effects of noise. Finally, move and/or augment the space defined by the significant factors and find where those factors are robust to noise.
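The screening stage of that sequence can be sketched in code. This is a minimal illustration, not anyone's actual method: the "true" surface, the factor effects, the noise level, and the screening threshold below are all hypothetical, chosen so that one factor dominates, one matters modestly, and one is inert (sparsity of effects).

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" surface: it already exists and the experimenter
# cannot change it, only sample it. Factor x0 dominates, x1 matters
# less, x2 is inert (sparsity of effects). Coefficients are made up.
def true_surface(x, noise=0.1):
    return (10 + 4 * x[..., 0] + 1.5 * x[..., 1] + 0 * x[..., 2]
            + noise * rng.standard_normal(x.shape[:-1]))

# Stage 1: screen with a 2^3 full factorial in coded units (-1, +1).
design = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
y = true_surface(design)

# Fit a main-effects model by least squares; for a +/-1 coded design,
# the factor effect is twice the regression coefficient.
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
effects = 2 * coef[1:]

# Keep only the dominant factors for the next iteration of experiments
# (the threshold here is an arbitrary illustration, not a significance test).
active = np.abs(effects) > 1.0
print(dict(zip(["x0", "x1", "x2"], active)))
```

A follow-up iteration would then augment the design in the subspace of the active factors (e.g., add axial and center points) rather than re-running the full space.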
From a geometric and statistical analysis standpoint, the more balanced the design, the easier the analysis and, one hopes, the lower the risk of bias (level setting may bias the study). Of course, as computing power has increased, the need for balance has become less pressing. However, when you are mapping the surface, you are looking to sample it in a way that mitigates your own biases about where to focus. Ironically, once you are near the optimum and are fine-tuning the factor levels in that region, statistical significance is no longer as important (it has already been established). You are simply testing different locations on the surface to find the best response values.
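The "balance makes the analysis easier" point can be checked numerically. In this sketch (my own illustration, using a 2^3 full factorial as the example design), balance shows up as zero column sums and orthogonality shows up as a diagonal cross-product matrix, which is exactly what makes the effect estimates uncorrelated and the least-squares algebra trivial.

```python
import itertools
import numpy as np

# A 2^3 full factorial in coded units (-1, +1), one row per run.
design = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

# Balance: each factor is run equally often at each level,
# so every coded column sums to zero.
balanced = np.allclose(design.sum(axis=0), 0.0)

# Orthogonality: the coded columns are mutually uncorrelated,
# so X'X is diagonal and the effect estimates do not interfere.
xtx = design.T @ design
orthogonal = np.allclose(xtx, np.diag(np.diag(xtx)))

print(f"balanced={balanced}, orthogonal={orthogonal}")
```

Dropping a run from the design breaks both properties, which is one concrete sense in which unbalanced designs make the analysis (and the interpretation of effects) harder.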
"A good model is an approximation, preferably easy to use, that captures the essential features of the studied phenomenon and produces procedures that are robust to likely deviations from assumptions." G.E.P. Box
“The experimenter is like the person attempting to map the depth of the sea by making soundings at a limited number of places” Box, Hunter & Hunter (p. 299)
"All models are wrong, some are useful" G.E.P. Box