Hi @ar2,
In your situation, I would simply run all these scenarios using Design Explorer. You can then obtain a table with most of the design performance diagnostics for different run sizes, replicate runs, centre points, optimality criteria, etc. It won't answer everything, but it's a good start, based on metrics and values.
You can see how to compare different designs in the example I provided for DSDs here: Blocks and Center points for a definitive screening design
Besides improving some design performance metrics, centre points are commonly used for three main purposes:
- Estimate pure error for the lack-of-fit test, to evaluate whether your model seems adequate.
- Decrease prediction variance in the centre of the experimental space.
- Detect curvature in the response(s). Note, however, that a detected curvature effect can't be attributed to the quadratic effect of any specific factor.
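To make the first and third points concrete, here is a minimal sketch (outside JMP, in Python with NumPy/SciPy, using made-up response values) of how replicated centre points provide pure error for a lack-of-fit test on a first-order model fitted to a 2^2 factorial plus four centre runs:

```python
# Hedged sketch: lack-of-fit test using pure error from replicated
# centre points. Design and response values are illustrative only.
import numpy as np
from scipy import stats

# Coded factor settings: 4 factorial corners + 4 centre replicates
X = np.array([
    [-1, -1], [1, -1], [-1, 1], [1, 1],   # corners
    [0, 0], [0, 0], [0, 0], [0, 0],       # centre replicates
], dtype=float)
y = np.array([10.2, 14.1, 11.8, 15.9, 14.0, 13.6, 14.3, 13.9])

# Fit the first-order model y = b0 + b1*x1 + b2*x2 by least squares
A = np.column_stack([np.ones(len(y)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
sse = resid @ resid                       # residual SS, df = n - p
df_sse = len(y) - A.shape[1]

# Pure error: variation among the centre replicates only
centre = y[4:]
ss_pe = np.sum((centre - centre.mean()) ** 2)
df_pe = len(centre) - 1

# Lack of fit = residual SS minus pure error
ss_lof = sse - ss_pe
df_lof = df_sse - df_pe

F = (ss_lof / df_lof) / (ss_pe / df_pe)
p = stats.f.sf(F, df_lof, df_pe)
print(f"Lack-of-fit F = {F:.2f}, p = {p:.3f}")
```

With these illustrative numbers the centre-point average sits well above the corner average, so the lack-of-fit F is large and the test flags curvature, but (as noted above) it cannot tell you which factor's quadratic effect is responsible.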
So the choice of adding centre points (or of adding quadratic effects to the model with estimability "Necessary", or "If Possible" if you have a low experimental budget) is mostly guided by your prior knowledge of the system, and by any quadratic/curvature effects you may expect for some of your factors.
The only situation you wouldn't cover with this platform is augmenting your design with Space Filling Augmentation. But in that case, it's more a question of covering the design space than of improving model-based design metrics.
Hope this answer helps,
Victor GUILLER
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)