Regarding the analysis of the DSD you performed: I can't conclude anything from those plots alone. Several analysis outputs that need to be evaluated were not provided, residuals analysis for example. It looks like you may have some unusual data points in your experiment. Also, how did you go from >10 factors to 6? Where did you set the factors you excluded? What is your inference space: what noise changed during the experiment, and what noise was held constant? Was your measurement system evaluated before or during the experiment?
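To make the residuals point concrete, here is a minimal sketch with entirely made-up data: fit a first-order model to a small screening design (coded units) and rank the standardized residuals to spot an unusual run. With only a handful of runs, one bad point inflates the residual spread itself, so ranking is more honest here than a hard cutoff.

```python
import numpy as np

# Hypothetical 2^2 factorial plus two center points, coded units.
# The last response is deliberately suspect.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [0, 0], [0, 0]], dtype=float)
y = np.array([8.2, 12.1, 9.0, 13.2, 10.5, 25.0])

A = np.column_stack([np.ones(len(X)), X])   # intercept + main effects
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - A @ beta

# Standardize by the residual standard deviation (n - p denominator)
std_res = residuals / residuals.std(ddof=A.shape[1])
worst = int(np.argmax(np.abs(std_res)))
print("standardized residuals:", np.round(std_res, 2))
print("most unusual run (0-indexed):", worst)
```

In real practice you would also plot residuals against fitted values and run order, and consider externally studentized residuals, since a single outlier can mask itself in small designs like this one.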
There is no way to provide advice that works in all situations. What is required is that you perform situational diagnostics based on generally accepted criteria for design selection. Part of that set of criteria is predicting the rank order of model effects up to 2nd order. Based on your predictions, choose the appropriate resolution (for a 2nd-order factorial) and polynomial (quadratic).
Advice, in general, is to build models hierarchically. Following the logic of a Taylor series, start with a 1st-order model and add order through iteration. Much depends on whether the optimum is truly inside the design space or outside it. If the optimum is truly outside the design space, it is preferred to explore the linear relationships first, so as to find the factors that move you through the response surface fastest (the fastest path between two points is a straight line). Once you are near the optimum area, augment the design. Complex relationships may exist throughout the space, but they are not useful until you get near the optimum. The more complex the model, the less universally useful it is.
My thoughts on global optimum vs. local maxima: you want your global optimum to be plateau-ish (that is, robust). Local maxima tend to be peaked/pointed and not robust. I suggest you read Box's discussions on sequential experimentation to get an understanding of his approach to model building and complex relationships.
"A good model is an approximation, preferably easy to use, that captures the essential features of the studied phenomenon and produces procedures that are robust to likely deviations from assumptions." G.E.P. Box
BTW, Bill Diamond (Practical Experiment Designs) also suggests reducing the spacing between levels to counter possible non-linear relationships. However, I don't think this applies to screening designs. IMHO, the biggest concern when just beginning the investigation is the Beta error: dropping potentially useful factors because there is insufficient evidence to suggest the factor has an effect. Setting factor levels bold, but reasonable, is good advice to minimize this error. In addition, since you hope to mitigate bias in selecting factors for further study, set all factors in the investigation bold.
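A rough illustration of why bold levels shrink the Beta error, using all hypothetical numbers: a 2-level factor with a true slope of 0.8 response units per unit of the factor, noise sd of 2, 8 runs per level, two-sided test at alpha = 0.05, evaluated with a normal approximation (good enough for a sketch). Widening the level spacing grows the difference between level means relative to the noise, so power rises and Beta falls:

```python
import math

def power(level_spacing, slope=0.8, sd=2.0, n=8, alpha_z=1.96):
    """Approximate power to detect the factor effect (normal approx.)."""
    effect = slope * level_spacing        # difference between level means
    se = sd * math.sqrt(2.0 / n)          # std. error of that difference
    z = effect / se
    # P(reject) under the alternative; the tiny lower tail is ignored
    return 0.5 * (1 + math.erf((z - alpha_z) / math.sqrt(2)))

for spacing in (1.0, 2.0, 4.0):
    p = power(spacing)
    print(f"spacing {spacing}: power = {p:.2f}, Beta = {1 - p:.2f}")
```

Under these assumed numbers, timid spacing leaves you with Beta well above 0.8, while bold spacing drives it below 0.2, which is the whole argument for bold-but-reasonable levels at the screening stage.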
"All models are wrong, some are useful" G.E.P. Box