Here are my thoughts:
1. I initially wanted to distribute samples from each run (treatment) over the printing bed, also including extreme locations like corners. Now I see that would introduce additional error into the model, since bed leveling is surely not perfect. If I want to consider only printing parameters like nozzle and bed temperature, horizontal expansion, etc., should I select only one location on the bed, for example in the middle? I am interested only in geometry measurements.
If you "randomly" distributed the treatments over different locations of the bed, you would indeed confound within-bed variation with treatment variation. This would decrease the precision of the experiment. However, holding the location constant is also NOT a good idea, as the results of your experiment would then be limited to that location. This is an inference-space issue. So you have options:
1. Confound the location with block and run an RCBD or BIB (replicate strategy).
2. Collect data from different locations while using the same treatment combinations (repeat strategy). With the data, do two things (and of course plot the within-treatment data to look for outliers, etc.):
- Average the data, which will reduce the within-bed variation and increase the precision, AND
- Calculate the variance of the data and use this as an additional response variable (compute the average and the variance for each Y). You will model both the mean and the variance to determine whether factors affect the mean of the dimensions or the variance of the dimensions. Recognize that the within-bed variation may also be confounded with within-part and measurement components of variation.
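A minimal sketch of the repeat strategy above: for each treatment, average the repeated measurements (raising precision on the mean) and also keep their sample variance as a second response. The treatment labels and deviation values are hypothetical, purely for illustration.

```python
# Repeat strategy: each treatment printed several times close together;
# we form two responses per treatment, the mean and the sample variance.
# All labels and numbers here are hypothetical.
from statistics import mean, variance

# deviation (mm) of one measured dimension, three repeats per treatment
repeats = {
    "T1": [0.12, 0.10, 0.15],
    "T2": [0.08, 0.07, 0.09],
    "T3": [0.20, 0.18, 0.25],
}

# two response variables per treatment: mean(Y) and var(Y)
responses = {t: (mean(ys), variance(ys)) for t, ys in repeats.items()}
for t, (m, v) in responses.items():
    print(f"{t}: mean={m:.3f}  var={v:.5f}")
```

You would then fit one model to the means and a second model to the variances (often on a log scale) to see which factors shift the dimension and which inflate its spread.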
2. In the case of only one location on the bed, would it then be enough to print only one sample (data point) for each run, or several samples printed very close to each other whose measurements are then averaged (there might always be some differences in material flow or quality, air flow, etc.)?
This is not a statistical question (and a statistician can't answer it). If you are concerned with variables not explicitly varied in the experiment (e.g., noise), you need strategies to handle the noise. Holding the noise constant is the wrong strategy!
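One such strategy is option 1 above: treat bed locations as blocks and randomize the run order of all treatments within each block (an RCBD). A hedged sketch, with treatment labels and location names purely illustrative:

```python
# RCBD sketch: bed locations are blocks; every treatment appears once
# per block, in randomized order within the block.
# Treatment and location names are illustrative assumptions.
import random

treatments = ["T1", "T2", "T3", "T4"]            # parameter combinations
blocks = ["front-left", "center", "back-right"]  # bed locations (blocks)

random.seed(42)  # fixed seed so the layout is reproducible
design = {}
for block in blocks:
    order = treatments[:]      # complete: every treatment in every block
    random.shuffle(order)      # randomized run order within the block
    design[block] = order

for block, order in design.items():
    print(f"{block}: {order}")
```

Because location is confounded with block, location effects are removed from the treatment comparisons rather than held constant, so the inference still covers the whole bed.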
The exact standardization of experimental conditions, which is often thoughtlessly advocated as a panacea, always carries with it the real disadvantage that a highly standardized experiment supplies direct information only in respect to the narrow range of conditions achieved by the standardization. Standardization, therefore, weakens rather than strengthens our ground for inferring a like result, when, as is invariably the case in practice, these conditions are somewhat varied.
R. A. Fisher (1935), The Design of Experiments (pp. 99-100)
So the question is: how do you run an experiment that is representative of future conditions without reducing the precision of the experiment so much that it provides no useful information?
“Unfortunately, future experiments (future trials, tomorrow’s production) will be affected by environmental conditions (temperature, materials, people) different from those that affect this experiment…It is only by knowledge of the subject matter, possibly aided by further experiments to cover a wider range of conditions, that one may decide, with a risk of being wrong, whether the environmental conditions of the future will be near enough the same as those of today to permit use of results in hand.”
Dr. W. Edwards Deming
"Block what you can, randomize what you cannot"
Dr. G.E.P. Box
"All models are wrong, but some are useful"
Dr. G.E.P. Box