Hi @frankderuyck,
It's almost impossible to answer your question a priori.
Whether a model is "strong enough" depends on:
- Your objective: the precision you would like to achieve, ideally expressed as a measurable target such as RMSE below a threshold or MAPE below a threshold (%) (see the short sketch after this list)
- The type of response you're modeling and its complexity: non-linearity, strong discontinuities, etc.
- The experimental and measurement error/"noisiness": experimental uncertainty/variance, measurement error/variance, etc.
- The model type you have chosen and how well it suits the type/complexity of the response
- ...
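As an illustration of the first point, here is a minimal sketch of what such a quantitative acceptance check could look like. The arrays, threshold values, and function names are purely hypothetical placeholders, not something prescribed by any particular DOE tool:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    """Mean absolute percentage error in %, assuming no zero responses."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Hypothetical measured responses vs. model predictions (e.g. from hold-out runs)
y_true = np.array([10.2, 12.5, 9.8, 14.1, 11.0])
y_pred = np.array([10.0, 13.0, 9.5, 13.8, 11.4])

# Acceptance thresholds chosen from the practical objective, not from the data
RMSE_MAX, MAPE_MAX = 0.5, 5.0
print(f"RMSE = {rmse(y_true, y_pred):.3f} (target < {RMSE_MAX})")
print(f"MAPE = {mape(y_true, y_pred):.2f}% (target < {MAPE_MAX}%)")
```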
Proceeding sequentially is a good idea in any case, and even if your model is not "good enough", there is a lot you can do with your results and model predictions to debug your model and augment your design. Some error analysis could help you figure out whether certain areas of your experimental space are systematically mispredicted, or whether the errors are "homogeneously" distributed.
This error analysis can then guide your augmentation strategy: a space-filling augmentation if the errors are homogeneously distributed and the model does not fit well across the entire design space, or a more localized augmentation if the errors are concentrated in specific areas.
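Below is a rough sketch of what such a residual-based check could look like. All column names, factor settings, and values are hypothetical, and grouping the design space into coarse regions is just one simple way to spot localized errors:

```python
import numpy as np
import pandas as pd

# Hypothetical results table: factor settings, measured response, model prediction
df = pd.DataFrame({
    "temperature": [60, 60, 80, 80, 100, 100],
    "pressure":    [1.0, 2.0, 1.0, 2.0, 1.0, 2.0],
    "measured":    [10.1, 11.9, 13.2, 15.0, 17.8, 21.5],
    "predicted":   [10.0, 12.0, 13.0, 15.1, 16.5, 19.9],
})
df["residual"] = df["measured"] - df["predicted"]
df["abs_residual"] = df["residual"].abs()

# Split the design space into coarse regions (here: low vs. high temperature)
df["region"] = np.where(df["temperature"] < 90, "low T", "high T")

# Similar error magnitudes across regions point towards a space-filling
# augmentation; one clearly worse region points towards a localized augmentation.
print(df.groupby("region")[["residual", "abs_residual"]].mean())
```

In practice you would also plot residuals against each factor and against the predicted values, but even a coarse grouped summary like this can already show whether the errors cluster in one part of the design space.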
Hope this answer helps,
Victor GUILLER
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)