Hi @YanivD,
It may be hard to give a definitive answer to your question without knowing your goal, factors, intended model, experimental budget, etc.
Both options could be of interest depending on your study goal:
- Option A with one batch per run: This option is interesting if you want a realistic assessment of the total variability your inputs contribute to your process. Since you'll "create" one batch per run, your batch-to-batch variability could be estimated quite precisely, perhaps at the expense of a larger design, but it may be confounded with other factors...
- Option B with one big batch per 5 runs: This option is interesting if you are not specifically interested in the variability of your inputs but still want to account for part of it. With this option you may be able to separate process variation from batch variation (thanks to the blocking factor), which might not be possible with Option A (since each run uses its own batch).
If you're interested in evaluating your process (and not your "product" input, since you can't check its QC anyway), Option B sounds good to me.
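To make the blocking idea in Option B concrete, here is a minimal sketch (with simulated, hypothetical data; the factor `x`, batch count, and variance values are assumptions for illustration, not your actual design) of treating batch as a random blocking factor in a mixed model, so batch-to-batch variance can be separated from residual process variance:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulate Option B: 4 batches, 5 runs per batch (20 runs total).
n_batches, runs_per_batch = 4, 5
batch = np.repeat(np.arange(n_batches), runs_per_batch)

x = rng.uniform(-1, 1, size=n_batches * runs_per_batch)   # a process factor
batch_effect = rng.normal(0, 2.0, size=n_batches)[batch]  # batch-to-batch variation
noise = rng.normal(0, 1.0, size=len(x))                   # run-to-run process variation
y = 10 + 3 * x + batch_effect + noise                     # response

df = pd.DataFrame({"y": y, "x": x, "batch": batch.astype(str)})

# Mixed model: fixed process factor, random intercept per batch (the block).
# The fitted group variance estimates batch-to-batch variability separately
# from the residual (process) variance.
model = smf.mixedlm("y ~ x", df, groups=df["batch"])
result = model.fit()
print(result.summary())
```

In Option A (one batch per run), batch and run are fully aliased, so this decomposition is not available: the batch effect is absorbed into the residual.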
I would be interested to know what other members of this community think about this use case.
Victor GUILLER
L'Oréal Data & Analytics
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)