Thanks for your input, statman! I'll address your points in the order you wrote them:
1) This wouldn't work, for the reason you stated. These "missing" values are effectively left-censored: the response is on the low end of the distribution, but so low that a numerical value can't be calculated. Imputing the mean would definitely nullify the effect.
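To illustrate the attenuation (a minimal numeric sketch with made-up values, not real chromatography data): when the unquantifiable run sits at the low end of a trend, mean imputation pulls it up to the middle of the data and flattens the slope the DOE is trying to estimate.

```python
import numpy as np

# Hypothetical single-factor DOE: the response falls with the factor,
# and the lowest run is too low to quantify (recorded as NaN).
x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
y = np.array([3.5, 2.75, 2.0, 1.25, np.nan])

# Slope fit on the observed runs only (true trend is -1.5 per unit).
slope_obs = np.polyfit(x[:4], y[:4], 1)[0]

# Mean imputation drags the censored run up to the sample mean,
# attenuating the very effect we are trying to detect.
y_mean = np.where(np.isnan(y), np.nanmean(y), y)
slope_mean = np.polyfit(x, y_mean, 1)[0]

print(slope_obs, slope_mean)
```

The imputed fit recovers only about half of the slope seen in the observed runs, which is the "nullifying" behavior described above.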
2) Not sure my manager would agree with using those values to build the model (the predictions would be based on previous DOEs that aren't identical to this one), but it's definitely something to keep in mind just in case.
3) This would be easier to sell to my manager, but it still wouldn't be the preferred approach, since we would be adding hypothetical data to the model.
4) I had been using this strategy: adding zeroes for the missing data, or handling it some other way, and comparing the effect on the model. This is usually how I defend or dismiss an approach in our group meetings: if both treatments generally agree, we choose what we think is the best one and move forward; if not, we look closer at the data to understand why they disagree.
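The sensitivity check above can be sketched as fitting the same model under each missing-data treatment and comparing the estimated effects (a minimal sketch with simulated data and a hypothetical one-factor design, not our actual DOE):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-factor DOE, 5 levels x 4 replicates: the true response
# falls with the factor, and anything below 1.0 can't be quantified.
x = np.repeat(np.linspace(-1, 1, 5), 4)
y_true = 2.0 - 1.5 * x + rng.normal(0, 0.05, x.size)
censored = y_true < 1.0

def fit_slope(x, y):
    """Ordinary least squares slope for y ~ 1 + x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Treatment A: impute zeroes for the censored runs.
slope_zero = fit_slope(x, np.where(censored, 0.0, y_true))

# Treatment B: drop the censored runs entirely.
slope_drop = fit_slope(x[~censored], y_true[~censored])

print(slope_zero, slope_drop)
```

Both treatments recover a negative effect here, but the zero-imputed fit exaggerates it; how far apart the two estimates land is exactly the disagreement worth digging into in a group meeting.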
5) This measurement system is pretty robust, so I'm not too worried about this point. Most of the analysis is automated, and we use pre-labeled standards straight from the vendor on a well-maintained chromatographic system whose accuracy, precision, linearity, etc. we routinely verify across multiple assay types and multiple molecules per assay without issue.
6) This was the solution we decided on. We found another parameter called "Start p/v" that measures the peak-to-valley ratio at the start of a peak. This response is mostly linear with respect to resolution, and it still produces a value in the cases where resolution can't be calculated. That gives us a response from the DOE we can use to build the model without artificially driving the fit toward 0, which is what entering zeroes for the missing values would have done.
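The surrogate-response check can be sketched as follows (made-up paired values for illustration only; the actual Start p/v numbers come from the chromatography software): verify rough linearity between Start p/v and resolution on the runs where both exist, then model Start p/v directly.

```python
import numpy as np

# Hypothetical paired measurements from runs where both responses exist.
resolution = np.array([1.8, 1.5, 1.2, 0.9, 0.6])
start_pv   = np.array([4.1, 3.4, 2.8, 2.1, 1.5])

# Check rough linearity before adopting Start p/v as the DOE response.
r = np.corrcoef(resolution, start_pv)[0, 1]
print(f"correlation: {r:.3f}")

# If the relationship is close enough to linear, model Start p/v as the Y
# response; unlike resolution, it still exists for the co-eluting runs.
slope, intercept = np.polyfit(resolution, start_pv, 1)
print(f"Start p/v ~ {slope:.2f} * Rs + {intercept:.2f}")
```

A correlation near 1 on the overlapping runs is what justifies treating conclusions about Start p/v as conclusions about resolution.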
Editorial comment: We started with a response surface design because we already had a lot of data on most of the factors from a different fluorescent tag and didn't expect much difference. The particular peak pair that gave us trouble didn't exhibit this issue in those previous DOEs, but we did change one factor that we had no prior data on. Our other three responses worked well; this one tripped us up. In hindsight, we probably should have screened the new factor before going into the response surface DOE, but I think we can make it work using the alternate Y response.