I will share my thoughts, though you may not agree with them. Replicated designs are only one way to include noise in an experiment; un-replicated designs can be quite useful. I use Factor Relationship Diagrams to graphically display the relationships between the design factors and the noise factors in an experiment. This way you understand which factors make up the basis for comparison, you can use scientific/engineering judgement to determine whether that noise is representative of future conditions, and it may stimulate thoughts and hypothesis development about the potential effect of such noise. Partitioning the noise can be quite useful for increasing the design's precision while not negatively affecting the inference space.
Your statement "Now the goal of the study is to do experiments in the lab (on small scales mimicking large scale operation as much as possible), systematically varying factors A, B and C while keeping all other noise factors constant." doesn't make sense to me. First, the goal should be, IMHO, to understand the causal structure affecting the chemical purity! Second, noise, by definition, is the set of factors you ARE NOT WILLING/ABLE TO CONTROL. The way you are collecting data is experimentation (rather than directed sampling, where no manipulation is done and you rely on partitioning the sources via how you sample and rationally subgroup the data). Holding the noise constant for the entire experiment is a terrible idea unless you intend to hold it constant forever (and pay the added expense of controlling the noise), which, of course, makes those factors controllable, not noise.
As you probably know, Fisher discovered this was a bad idea: the resultant inference space was so narrow as to be useless when faced with the reality that noise varies and may impact the results (see Fisher's papers on his agricultural experiments). He introduced the technique of blocking to handle the noise: the noise is held constant within a block and purposefully varied between blocks, so you can both expand the inference space and increase the precision of the design simultaneously. It also allows for estimation of block-by-factor interactions, a measure of robustness.
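To make the blocking idea concrete, here is a minimal sketch (my illustration, not part of the original discussion) of a blocked two-level experiment analyzed with statsmodels. The factor names, raw-material "lots" used as blocks, and simulated purity values are all hypothetical, and the model choice is just one reasonable way to estimate block and block-by-factor terms.

```python
# Minimal sketch: a randomized complete block design where the noise
# (e.g., raw-material lot) is held constant within a block and varied
# between blocks. Factor names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

blocks = ["lot1", "lot2", "lot3", "lot4"]   # noise partitioned into blocks
levels = [-1, +1]                           # design factor A (coded levels)
design = pd.DataFrame(
    [(b, a) for b in blocks for a in levels for _ in range(2)],
    columns=["block", "A"],
)

# Simulated purity: block shift + factor effect + a small block-by-factor
# interaction (non-robustness in lot2) + random error.
block_shift = {"lot1": 0.0, "lot2": 0.8, "lot3": -0.5, "lot4": 0.3}
design["purity"] = (
    90
    + design["block"].map(block_shift)
    + 1.5 * design["A"]
    + 0.4 * design["A"] * (design["block"] == "lot2")
    + rng.normal(0, 0.3, len(design))
)

# Blocking removes the between-lot variation from the error term, and the
# block:A term estimates how consistent the factor effect is across lots.
model = smf.ols("purity ~ C(block) * A", data=design).fit()
print(sm.stats.anova_lm(model, typ=2))
```

The point of the printout is simply that the block term soaks up the lot-to-lot noise (sharpening the estimate of A) while the block:A term gives you the robustness signal mentioned above.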
"Block what you can, randomize what you cannot" G.E.P. Box
Shewhart, in the Shewhart Cycle, suggested to "carry out the study, preferably on a small scale". I believe the intent is that you want to simulate the real-world conditions in the small-scale study. I think the way to do this is to exaggerate the effects of noise so as to capture the long-term variation of the noise in a very short time period (the experiment).
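As a sketch of what "exaggerating the noise" might look like in practice (again my illustration, with hypothetical factor names), you can cross the control factors with a noise factor deliberately set to its plausible extremes instead of holding it at a nominal value:

```python
# Sketch: cross the control factors (A, B, C) with a noise factor pushed to
# its plausible extremes, so one short experiment sees the long-term range
# of the noise instead of a single nominal value. Names are hypothetical.
from itertools import product
import pandas as pd

control_levels = {"A": [-1, 1], "B": [-1, 1], "C": [-1, 1]}           # control array
noise_levels = {"ambient_humidity": ["low_extreme", "high_extreme"]}  # noise array

runs = [
    dict(zip(["A", "B", "C", "ambient_humidity"], combo))
    for combo in product(*control_levels.values(), *noise_levels.values())
]
design = pd.DataFrame(runs)
print(design)  # 2^3 control combinations x 2 noise extremes = 16 runs

# Analyzing the noise-by-control interactions from these runs indicates which
# control settings keep purity stable across the humidity extremes (robustness).
```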
That being said, to answer your question, I use a pseudo-Bayesian philosophy. I have the scientists/engineers predict the data for each treatment of the experiment (a priori, of course, and biased by the engineers' hypotheses), predict ALL possible outcomes (to mitigate the bias), and predict what actions they would take for ALL possible outcomes. This gives context to the experimental analysis and provides a practical approach to evaluating the data. For example, if you get a result from a treatment that is wildly different from what was expected, ask whether that data can be explained (by the hypothesis or a modified hypothesis); if not, question the execution of that treatment or what else was going on while that treatment was run (and if you have already predicted why the data might not match, you increase the chance of finding the random effect).
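A minimal sketch of this prediction discipline (my illustration; the treatments, numbers, and column names are made up) is simply to record each a-priori prediction and planned action per treatment, then flag the treatments whose observed result falls outside the predicted range so their execution and background conditions get questioned first:

```python
# Sketch: compare a-priori predictions against observed results and flag
# "surprise" treatments for follow-up questions. All values are hypothetical.
import pandas as pd

plan = pd.DataFrame({
    "treatment": ["A-low,B-low", "A-high,B-low", "A-low,B-high", "A-high,B-high"],
    "predicted_purity": [90.0, 93.0, 91.0, 95.0],
    "prediction_band": [1.0, 1.0, 1.5, 1.0],   # engineer's stated +/- uncertainty
    "planned_action_if_confirmed": ["drop", "scale up", "drop", "scale up"],
})

observed = pd.Series([90.4, 92.6, 88.0, 95.2], name="observed_purity")
results = plan.join(observed)

results["surprise"] = (
    (results["observed_purity"] - results["predicted_purity"]).abs()
    > results["prediction_band"]
)
print(results[["treatment", "predicted_purity", "observed_purity", "surprise"]])
# A 'surprise' row prompts the questions above: can a (modified) hypothesis
# explain it, or should the execution of that treatment be re-examined?
```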
To paraphrase Louis Pasteur: "Chance favors the prepared mind."
"All models are wrong, some are useful" G.E.P. Box