There are a number of considerations when choosing a design. No one knows which will actually perform better until after you run the experiment. My advice is to create multiple designs, list the pros and cons of each, note what could be learned from each (what is separated, what is confounded, and what is restricted), and weigh the possible knowledge gained against the resources required.
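To make the confounding comparison concrete, here is a minimal sketch (the factor names and the choice of defining relation are hypothetical) contrasting a full 2^3 factorial with a half-fraction generated by C = A*B. In the fraction, the main effect of C is aliased with the A:B interaction, which is exactly the kind of trade-off worth listing when you weigh designs:

```python
from itertools import product

# Full 2^3 factorial: 8 runs over factors A, B, C at levels -1/+1.
full = [dict(zip("ABC", levels)) for levels in product((-1, 1), repeat=3)]

# Half-fraction defined by C = A*B (defining relation I = ABC): only
# 4 runs, but C is now confounded with the A:B interaction.
half = [run for run in full if run["C"] == run["A"] * run["B"]]

# Show the confounding: in the half-fraction, the column for C is
# identical to the column for A*B, so those effects cannot be separated.
c_col = [run["C"] for run in half]
ab_col = [run["A"] * run["B"] for run in half]
print(len(full), len(half), c_col == ab_col)
```

Listing the alias structure like this for each candidate design makes the "what is confounded" column of the pros-and-cons table explicit.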
One thing I don't see being addressed is noise. A few questions and suggestions:
1. How confident are you in the measurement system? Repeated runs may help assess this in the context of the experiment.
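One way to use those repeated runs is to compare the within-run (repeatability) variance against the run-to-run variance. A minimal sketch with made-up replicate readings (all values hypothetical):

```python
from statistics import mean, pvariance

# Hypothetical data: three repeated readings of the same response at
# each of four experimental runs.
replicates = {
    "run1": [10.1, 10.3, 10.2],
    "run2": [12.0, 11.8, 12.1],
    "run3": [9.7, 9.9, 9.8],
    "run4": [11.2, 11.1, 11.3],
}

# Pooled within-run variance estimates measurement (repeatability) noise;
# the variance of the run means reflects run-to-run differences.
within = mean(pvariance(v) for v in replicates.values())
between = pvariance([mean(v) for v in replicates.values()])
print(f"repeatability variance ~ {within:.4f}, between-run variance ~ {between:.4f}")
```

If the repeatability variance is a large fraction of the run-to-run variance, the measurement system itself may swamp the factor effects you are trying to detect.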
2. What are your strategies to maximize the likelihood that the experiment represents future conditions? For example, the chemicals used in the reaction are likely from continuous processes that are distributed in batches. How much variation is there within batch and between batch? Do ambient conditions have an effect? Are you concerned with setup of the process?
Consider blocks or split-plots to manage the precision of the experiment while not compromising the inference space.
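A minimal blocking sketch (factor names and the blocking choice are hypothetical): run a 2^2 design in two raw-material batches, confounding the A:B interaction with batch so that the main effects of A and B stay clear of batch-to-batch differences:

```python
from itertools import product

# 2^2 full factorial over hypothetical factors A and B.
runs = [dict(zip("AB", levels)) for levels in product((-1, 1), repeat=2)]

# Assign runs to two batches using the A*B contrast, so batch is
# confounded with the A:B interaction rather than with a main effect.
for run in runs:
    run["batch"] = 1 if run["A"] * run["B"] == 1 else 2
```

Within each batch, A and B each appear once at the low level and once at the high level, so a constant batch offset cancels out of the main-effect contrasts; the price is that the A:B interaction can no longer be separated from the batch effect.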
3. I suggest creating a predicted rank order of model effects up to second order, both factorial (two-factor interactions) and polynomial (quadratic terms). This will help you decide the resolution you need, and whether the departure from a linear model is likely to be significant and therefore whether you need to add levels to estimate curvature.
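The candidate effects to rank can be enumerated mechanically; here is a sketch for three hypothetical factors. The comment on resolution reflects the usual aliasing pattern, not anything specific to your process:

```python
from itertools import combinations

factors = ["A", "B", "C"]  # hypothetical factor names

# Candidate effects up to second order: main effects, two-factor
# interactions, and quadratic (curvature) terms. Rank these by
# predicted importance *before* running the experiment.
effects = (
    factors
    + [f"{a}:{b}" for a, b in combinations(factors, 2)]
    + [f"{f}^2" for f in factors]
)
print(effects)
```

Note that a resolution-III fraction aliases main effects with two-factor interactions, and quadratic terms require a third level (e.g. center points) to estimate at all, so where each effect sits in your predicted ranking directly drives the design choice.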
4. Lastly, predict every possible outcome and anticipate what you will do with the information gained from each outcome. For example, if you run the experiment and create a practically significant amount of variation, but none of it is assignable to the factor effects, what will you do? If factor A is significant and the + level is better, what will you do? If factor A is significant and the - level is better, what will you do? If factor A is not significant, what will you do? And so on.
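This pre-planning can be written down as a literal decision table before the first run. A minimal sketch (the outcomes and actions are hypothetical placeholders for your own):

```python
# Hypothetical pre-registered decision table: one planned action for
# each possible outcome regarding factor A, written before the experiment.
decisions = {
    ("A significant", "+ better"): "set A high; confirm with a follow-up run",
    ("A significant", "- better"): "set A low; confirm with a follow-up run",
    ("A not significant", None): "drop A; investigate remaining variation",
    ("variation unassigned", None): "revisit noise strategy and held-back factors",
}

observed = ("A significant", "+ better")  # example observed result
print(decisions[observed])
```

If you find an outcome for which you have no planned action, that is a sign the design (or the question) needs rework before you spend the runs.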
"All models are wrong, but some are useful." — G.E.P. Box