I am currently evaluating a DoE study we ran for a customer. The responses are analysed in our QC lab, and the data we get back is rounded to one decimal place. The method precision for the different analyses is at best ±0.1, but the instrument's results are originally stored with four decimal places. I am now discussing with my counterpart at the customer whether we should use the rounded values from QC or all decimal places. In my understanding, we should use only the significant digits dictated by the method error, not all decimal places, since the extra digits would only add noise to the data without any relevant information. The customer argues that rounding causes the variance of the response to be underestimated and turns the continuous response distribution into a discrete one. In his opinion, this can lead to overfitting and in some cases create significant effects that are not really in the data.
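The variance claim can be checked directly with a small simulation. The sketch below uses purely hypothetical numbers (a true response of 5.0, method noise with sd 0.1, reporting resolution 0.1); Sheppard's correction says rounding to a grid of width h adds roughly h²/12 of quantization variance, so for noise of this size rounding slightly *inflates* the variance rather than underestimating it:

```python
import numpy as np

rng = np.random.default_rng(42)
h = 0.1  # reporting resolution: one decimal place

# Hypothetical response: true value 5.0 with method noise of sd 0.1
y_full = rng.normal(loc=5.0, scale=0.1, size=10_000)
y_rounded = np.round(y_full, 1)  # what QC would report

var_full = y_full.var(ddof=1)
var_rounded = y_rounded.var(ddof=1)

print(var_full)            # ~0.0100
print(var_rounded)         # ~0.0108, close to var_full + h**2 / 12
print(h**2 / 12)           # ~0.00083, the Sheppard quantization term
```

The quantization term h²/12 ≈ 0.0008 is an order of magnitude below the method variance of 0.01, which suggests that for this noise-to-resolution ratio the rounding itself is a minor contributor either way.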
Do you still have the experimental units? Can you repeat their measurements? If so, you may be able to reduce the measurement error by averaging the repeated measures.
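A quick sketch of why averaging helps, with hypothetical numbers (method sd 0.1, four repeats per unit): the standard error of the mean of n independent repeats shrinks as 1/√n, so four repeats would halve the effective measurement error.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1   # assumed method sd
n_rep = 4     # hypothetical number of repeat measurements per unit

# Simulate many units, each measured n_rep times, then average per unit
reps = rng.normal(loc=5.0, scale=sigma, size=(10_000, n_rep))
unit_means = reps.mean(axis=1)

# Spread of the averaged values: sigma / sqrt(n_rep) = 0.05
print(unit_means.std(ddof=1))
```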
The first question I would ask the customer is: what is practically significant? How large a change in the response is of practical value? If practical significance is on the order of <0.1, then your measurement system is not adequate. If it is more on the order of >1, then your measurement system is probably fine.
If the effective discrimination of the measurement system is the tenths place, the digits after that are just random numbers. The problem with measurement-system discrimination issues is that we don't know whether the system rounds the "true" value up, down, or consistently in one direction, so errors may be over- or underestimated. Whether it matters depends on practical significance.