Here are my thoughts:
First, a clarification: when you run fractional factorial designs, the interactions are still present; they may simply be aliased and therefore not specifically assignable. Your concerns are legitimate. However, if you commit to sequential experimentation, errors you make in interpreting the initial experiments will be uncovered in the next iteration.
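To make the aliasing point concrete, here is a minimal pure-Python sketch (my own illustration, not from the original post) of a 2^(3-1) half-fraction with the common generator C = AB, using the usual ±1 coding. The contrast column used to estimate the C main effect is literally the same column as the AB interaction, so the two effects are estimated as one inseparable sum:

```python
from itertools import product

# Half-fraction 2^(3-1) design: full factorial in A and B,
# with the third factor set by the generator C = A*B.
runs = [(a, b, a * b) for a, b in product((-1, 1), repeat=2)]

# Contrast columns for the C main effect and the A*B interaction.
col_C = [c for a, b, c in runs]
col_AB = [a * b for a, b, c in runs]

print(runs)
print(col_C == col_AB)  # True: C is aliased with AB
```

The interaction has not disappeared; whatever effect AB has is folded into the estimate labeled "C". A follow-up fraction (the other half) de-aliases them, which is the sequential-experimentation safety net described above.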
There is no one correct approach to experimentation. Given the situation you pose, I have the following comments:
1. Four factors is not much of a screening design. You can run 4 factors in a resolution IV design in 8 treatments. I consider screening designs to have ≥5 factors, at which point you start realizing the benefits of fractionating and of sequential design.
2. I always start with SME input (domain knowledge) as the basis for determining resolution. I suggest predicting the rank order of 1st- and 2nd-order effects. Are the 2nd-order effects possible or probable? As 2nd-order effects rise in your rank ordering, consider higher-resolution designs.
3. Consider the design space. Higher-order effects (e.g., interactions and curvature) occur inside the space. So the first objective is to move your space "near" the optimum (of course, no one knows where the optimum is... we rely on SME to provide guidance). Then augment the space. If you are far from the optimum, it may not be efficient to study inside the space. We all know the shortest path between two points is a straight line, so use factors at 2 levels to move the space. This is the idea behind screening: experiment on lots of factors hoping to find the ones that move the space quickly, then focus on the significant factors.
4. Don't ignore noise. In just about every case, the factors you manipulate are a small subset of all the factors. What do you do with the others? The correct answer is NOT to hold them constant. What strategy should you use to partition and assign the noise (e.g., repeats, replicates, blocks, split-plots, nesting)? This is IMHO the most challenging part of experimentation and the least taught. Unfortunately, the software gives little guidance here.
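The 8-treatment, 4-factor resolution IV design mentioned in point 1 can be sketched in a few lines of pure Python (my own illustration; I use the conventional generator D = ABC, giving the defining relation I = ABCD). Resolution IV means main effects are clear of two-factor interactions, but two-factor interactions alias each other in pairs:

```python
from itertools import product

# 2^(4-1) resolution IV half-fraction: full factorial in A, B, C,
# with D set by the generator D = A*B*C (defining relation I = ABCD).
runs = [(a, b, c, a * b * c) for a, b, c in product((-1, 1), repeat=3)]
print(len(runs))  # 8 treatments

# Consequence of I = ABCD: two-factor interactions pair up,
# e.g. AB = CD, so their columns are identical.
col_AB = [a * b for a, b, c, d in runs]
col_CD = [c * d for a, b, c, d in runs]
print(col_AB == col_CD)  # True: AB and CD cannot be separated
```

So with 4 factors the fraction only saves 8 runs over the full 2^4, and it costs you clean two-factor interactions, which is why the real payoff of fractionating starts at 5+ factors.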
In all cases, design multiple experiments. Compare and contrast them. What will each do in terms of assigning, confounding, and restricting factor effects? Weigh this against your resources and pick one.
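One concrete way to "compare and contrast" candidate designs on confounding is to count how many pairs of two-factor-interaction columns are perfectly aliased in each design's model matrix. This is a simplified pure-Python sketch of my own (the function name and the two candidates are illustrative assumptions, not a standard API):

```python
from itertools import product, combinations

def two_fi_alias_pairs(runs, k):
    """Count pairs of two-factor-interaction contrast columns that are
    perfectly aliased (identical or sign-flipped) in a 2-level design
    with k factors."""
    cols = {}
    for i, j in combinations(range(k), 2):
        cols[(i, j)] = tuple(r[i] * r[j] for r in runs)
    aliased = 0
    for p, q in combinations(cols, 2):
        u, v = cols[p], cols[q]
        if u == v or u == tuple(-x for x in v):
            aliased += 1
    return aliased

# Candidate 1: 8-run 2^(4-1) half-fraction (generator D = A*B*C).
half = [(a, b, c, a * b * c) for a, b, c in product((-1, 1), repeat=3)]
# Candidate 2: 16-run full 2^4 factorial.
full = list(product((-1, 1), repeat=4))

print(two_fi_alias_pairs(half, 4))  # 3 pairs: AB=CD, AC=BD, AD=BC
print(two_fi_alias_pairs(full, 4))  # 0: all two-factor interactions clear
```

Tallies like this (alongside run counts, blocking options, and randomization restrictions) give you something concrete to weigh against resources when picking one design over another.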
"The best design you'll ever design is the design you design after you run it"
"All models are wrong, some are useful" G.E.P. Box