First, welcome to the community. Realize that design evaluation tools are based on certain assumptions (many of which are unknown before the experiment is run). There are a number of factors that affect design selection, and all of these depend on your predictions. See:
https://www.jmp.com/support/help/en/17.0/?os=mac&source=application#page/jmp/overview-of-the-evaluat...
For your situation, you chose a design and ran it, correct? How did you go about doing the analysis? If I interpret your analysis correctly, it sounds like you did not create any practically significant variation in the experiment. Why not? There are 3 common reasons for this:
1. Levels were not bold enough
2. Measurement systems were not adequate
3. The factors don't matter
(BTW, this is different from the case where you created practically significant variation but it is not assignable to the factors you manipulated).
What DFs were used to estimate the random error (MSE)? Was there any replication? When you say the quadratic terms are statistically significant, what comparison is that based on? Is it possible to share your data table?
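To make the replication point concrete, here is a quick sketch (Python with made-up numbers, not JMP output or your data) of where pure-error degrees of freedom come from: each set of r replicate runs contributes r - 1 DF to the estimate of random error.

```python
# Hypothetical replicated center points from a response-surface design.
import numpy as np

center_point_responses = np.array([10.2, 9.8, 10.5, 10.1])

r = len(center_point_responses)
pure_error_df = r - 1                                   # each replicate set gives r - 1 df
pure_error_ms = center_point_responses.var(ddof=1)      # sample variance = SS / df

print(f"pure-error df: {pure_error_df}")
print(f"pure-error mean square (estimate of random error): {pure_error_ms:.3f}")
```

Without some replication (or other source of error DF), there is nothing to test the model terms against.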
In general, if you remove significant terms from the model, their variation is pooled into the error term. This inflates the error term (MSE) and makes the remaining factors look less significant.
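If it helps to see that mechanically, here is a minimal sketch (Python/statsmodels on simulated data, not JMP or your design) comparing a full model with one where an active term has been dropped: the dropped term's variation is pooled into the residual, the MSE goes up, and the remaining factor's p-value gets worse.

```python
# Simulated two-factor example: both factors are truly active, x2 has the larger effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 24
data = pd.DataFrame({
    "x1": rng.uniform(-1, 1, n),
    "x2": rng.uniform(-1, 1, n),
})
data["y"] = 0.8 * data["x1"] + 2.0 * data["x2"] + rng.normal(0, 0.5, n)

full = smf.ols("y ~ x1 + x2", data=data).fit()
reduced = smf.ols("y ~ x1", data=data).fit()   # drop the active term x2

print("MSE, full model:   ", round(full.mse_resid, 3))
print("MSE, reduced model:", round(reduced.mse_resid, 3))           # pooled, inflated
print("p-value of x1, full:   ", round(full.pvalues["x1"], 4))
print("p-value of x1, reduced:", round(reduced.pvalues["x1"], 4))   # looks less significant
```

The same logic is why pooling truly negligible terms into error is usually harmless, while pooling active ones is not.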
Not sure what you mean by "What other metrics should be taken into account?". What is the goal of the experiment? Are you trying to understand causal structure or "pick a winner"? Are you building or refining a model? There are a number of statistics that can be useful to help build your model.
"All models are wrong, some are useful" G.E.P. Box