Good questions. Here are my thoughts:
1. The typical failure mode for level setting is TOO NARROW. One hypothesis is that this comes from a fear of making bad product (which we have been taught not to do). Remember, you are trying to create variation, not pick the winner. Bolder level settings increase the inference space with few resource ramifications. Bold, but reasonable, is my advice.
2. The other observation I have from running experiments and teaching experimentation for 35 years is that there is a, perhaps subconscious, desire to have the "best" level included in the experiment. This is not necessary for sequential, iterative work. I believe this is the effect of management wanting an answer quickly rather than understanding the problem and the causal structure. Say the word iteration to management and they think time and money...
3. Advice I give is to PREDICT ALL possible outcomes of the experiment (i.e., What will you do if there is no practical change in the response variables? What will you do if there is a significant change in the response variables, but none of the factors are significant? What will you do if there is special cause variation during the experiment? What will you do if factor A is significant? What if it is not? etc.). In addition, I suggest predicting the value of the response variable for each treatment. One benefit of this prediction is that you can think about run order. Let's say you are uncertain about level setting. You run the treatment predicted to have the best value and the treatment predicted to have the worst value as the first two runs of the experiment. If no variation is created, you re-think the level settings of the factors. If a significant amount of variation is created, you proceed with the experiment. (A small sketch of this screening idea appears after this list.)
4. I once asked Dr. Taguchi this very question many years ago. He said first that the treatment combination that produced no result may be the most informative treatment in the experiment. He also spent a good deal of time trying to understand what actually happened when the process failed, and he created a response variable that quantified that phenomenon. In fact, I think Taguchi's thinking about response variables is one of his most significant contributions to the field.
5. Replicating just that treatment combination confounds the block effect with the replicated treatment effect, so be cautious with this approach.
6. If you lose only one treatment, you can regress on the remaining data to estimate what that result may have been and use that value to salvage the rest of the experiment. (Or use the mean of the remaining data as the substitute value, which has the effect of removing that treatment's effect.) If you use a substitution concept, my advice is to do multiple substitutions and see to what extent they agree. If the agreement is sufficient, then go with it. If not, then you have to think about additional runs. (A sketch of this substitution check also appears below.)
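To make the run-order idea in point 3 concrete, here is a minimal Python sketch. The prediction model, the observed responses, and the noise figure are all made-up illustrations, not part of any standard recipe; the point is only the mechanics of running the predicted-best and predicted-worst treatments first and checking whether they create variation.

```python
import itertools
import random

# Hypothetical predictions for each treatment of a 2^3 factorial
# (factors A, B, C at -1/+1); the prediction model is assumed.
predicted = {
    levels: 50 + 8 * levels[0] - 5 * levels[1] + 2 * levels[2]
    for levels in itertools.product((-1, 1), repeat=3)
}

# Identify the treatments predicted to give the best and worst values.
best = max(predicted, key=predicted.get)
worst = min(predicted, key=predicted.get)

# Run those two first; randomize the rest.
remaining = [t for t in predicted if t not in (best, worst)]
random.shuffle(remaining)
run_order = [best, worst] + remaining
print("Run order:", run_order)

# After observing the first two runs, compare the spread they create
# to the run-to-run noise you expect (both figures illustrative).
y_best, y_worst = 62.0, 41.0   # observed responses for the first two runs
expected_noise = 3.0           # assumed run-to-run standard deviation
if abs(y_best - y_worst) < 2 * expected_noise:
    print("Little variation created -- re-think the level settings.")
else:
    print("Variation created -- proceed with the experiment.")
```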
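And for point 6, a sketch of comparing two substitutes for a single lost run: a regression estimate from the remaining data versus the mean of the remaining data. The design is a 2^3 factorial with the last run lost, the response values are fabricated for illustration, and the model is main effects only.

```python
import itertools
import numpy as np

# 2^3 factorial design; the last run was lost (recorded as NaN).
design = np.array(list(itertools.product((-1, 1), repeat=3)), dtype=float)
y = np.array([41.0, 44.5, 47.0, 50.5, 52.0, 55.0, 58.5, np.nan])

observed = ~np.isnan(y)
X = np.column_stack([np.ones(len(design)), design])  # intercept + main effects

# Substitution 1: regress on the remaining runs, predict the lost run.
coef, *_ = np.linalg.lstsq(X[observed], y[observed], rcond=None)
regression_sub = (X[~observed] @ coef)[0]

# Substitution 2: the mean of the remaining data
# (this has the effect of removing that treatment's effect).
mean_sub = y[observed].mean()

print("Regression substitute:", regression_sub)
print("Mean substitute:      ", mean_sub)
# If the substitutes roughly agree, salvage the experiment with either;
# if they disagree badly, plan additional runs instead.
```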
"All models are wrong, some are useful" G.E.P. Box