Finding the best use of time is a challenge. In industrial experimentation, you often need to understand the effect of time on your response, and how that effect changes with other factors.
In a designed experiment you set your input factors at the start of each run. It is then relatively cheap to sample and measure your response at several time points during the run. You might have automated logging, in which case you will have a lot more data on how your response varies through time.
It seems like it should be easy to use this data to build a model that tells you what you need to know. All you want is a profiler plot for your response, with time as one of the “X”s along the bottom. However, time is not like the other Xs, because you didn’t set it at the start of each run. This seemingly subtle difference raises many questions.
Can you just add time as an effect, as you would for your other DoE factors? Should you be worried about violating the assumptions of ordinary least squares regression? Is it safer just to create a separate model for each time point?
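To make the first of those questions concrete, here is a minimal sketch of the naive approach: simulated data (the run structure, factor levels, and effect sizes are all hypothetical) where one factor is set per run and the response is sampled at several time points within each run, then fit with ordinary least squares treating time as just another effect, plus its interaction with the factor. This illustrates the mechanics only; the repeated measurements within each run still violate the independence assumption of OLS.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical designed experiment: 8 runs, factor X set at the start
# of each run, response Y sampled at 4 time points during each run.
runs = pd.DataFrame({"run": range(8), "X": np.tile([-1, 1], 4)})
times = [1, 2, 3, 4]

# Expand to one row per (run, time) combination
df = runs.loc[runs.index.repeat(len(times))].reset_index(drop=True)
df["time"] = np.tile(times, len(runs))

# Simulated response in which the effect of X changes with time
# (true X:time interaction coefficient = 1.5)
df["Y"] = (2 + 0.5 * df["time"] + 1.5 * df["X"] * df["time"]
           + rng.normal(0, 0.3, len(df)))

# Naive approach: add time as an effect alongside X, with interaction.
# Caveat: measurements within a run are correlated, so the usual OLS
# standard errors and p-values are not trustworthy here.
fit = smf.ols("Y ~ X * time", data=df).fit()
print(fit.params)
```

The point estimates recover the simulated effects, which is why the naive model looks fine in a profiler; the trouble lies in the inference, since the within-run correlation is ignored.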