Hey @Michael_Mart!
In general, extrapolation in DOE is considered risky for two main reasons:
- Your uncertainty grows very fast once you leave the design region.
- You are assuming the model still holds beyond the design region (i.e., that the underlying process does not change significantly once you step outside it).
That second assumption is the main reason extrapolation is usually discouraged. It's very easy for the underlying process to change just outside the region of your experiment. My background is in accelerated life testing, which is built on the idea of extrapolating outside the design region and so is the exception to the rule. Even so, the assumption that the model still holds is ever-present, to the point that we adjust our designs to protect against it.
Of course, one way you can "test the waters" is to collect data outside the design region, record the responses, and then use that data to test your model. That sounds very similar to your situation, so if that's the case, you're in a great position! Rather than using the model to validate those points, though, you'll be using those points to validate your model, which is always the more appropriate way to look at it.
A common way to test your model is to compute a quantity called the Mean Squared Prediction Error (MSPE): take the squared difference between each observed response and the model's prediction at the same point, then average those squared differences. If the MSPE is relatively small (say, not much larger than the Monte Carlo simulation variation), that suggests your model is performing well at those points, and you might be justified in extrapolating to them.
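In symbols, that's MSPE = (1/n) * Σ (y_i − ŷ_i)². Here's a minimal sketch in Python of what that check could look like; the data values and the noise-variance benchmark are stand-ins for your own validation runs and simulation noise level:

```python
import numpy as np

# Observed responses at the points *outside* the design region
# (placeholder values -- substitute your own validation runs).
y_observed = np.array([10.2, 11.5, 9.8, 12.1])

# Model predictions at those same points.
y_predicted = np.array([10.0, 11.9, 9.5, 12.6])

# Mean Squared Prediction Error: average squared difference
# between what you observed and what the model predicted.
mspe = np.mean((y_observed - y_predicted) ** 2)

# Rough benchmark: the run-to-run variance you'd expect from your
# Monte Carlo simulation (assumed known here for illustration).
noise_variance = 0.25

print(f"MSPE = {mspe:.3f}")
if mspe <= 2 * noise_variance:  # the factor of 2 is a judgment call
    print("MSPE is comparable to the simulation noise -- the model "
          "seems to hold at these points.")
else:
    print("MSPE is much larger than the noise level -- be cautious "
          "about extrapolating here.")
```

The factor of 2 on the threshold is just a rule of thumb, not a formal test; what counts as "not much larger" depends on how much run-to-run variation your simulation actually shows.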