Hi @AutoSetMarmoset,
Mixture designs are optimization designs: the emphasis with this type of design is on predictive ability rather than on screening and filtering terms by statistical significance.
In the analysis, it makes more sense to start from the full assumed model, with the terms you entered at design creation, and then remove terms (except main effects) based on the model's predictive performance (RMSE, for example), NOT based on the individual p-values/LogWorth of each term: there are multicollinearity/correlations among mixture factors and no intercept in this type of model, so p-values/LogWorth are not a valid metric for model selection.
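If it helps to see the idea outside of JMP, here is a minimal Python sketch (not JSL) of that backward-elimination strategy: a Scheffé-type model fitted without an intercept, main effects always kept, and blending terms removed one at a time as long as cross-validated RMSE improves. The data, column names (x1, x2, x3, y) and candidate terms are all invented for illustration:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Toy mixture data: three components summing to 1, plus a response
X = rng.dirichlet(alpha=[2.0, 2.0, 2.0], size=30)
df = pd.DataFrame(X, columns=["x1", "x2", "x3"])
df["y"] = (3*df.x1 + 5*df.x2 + 2*df.x3
           + 8*df.x1*df.x2 + rng.normal(0, 0.2, size=30))

main_effects = ["x1", "x2", "x3"]                       # always kept
blending = [("x1", "x2"), ("x1", "x3"), ("x2", "x3")]   # candidates for removal

def design_matrix(df, pairs):
    out = df[main_effects].copy()
    for a, b in pairs:
        out[f"{a}*{b}"] = df[a] * df[b]
    return out

def cv_rmse(df, pairs):
    # Scheffe mixture model: fit WITHOUT an intercept
    model = LinearRegression(fit_intercept=False)
    scores = cross_val_score(model, design_matrix(df, pairs), df["y"],
                             scoring="neg_root_mean_squared_error", cv=5)
    return -scores.mean()

pairs, best = list(blending), cv_rmse(df, blending)
while pairs:
    # Try dropping each remaining blending term; keep the best removal
    trials = {p: cv_rmse(df, [q for q in pairs if q != p]) for p in pairs}
    drop = min(trials, key=trials.get)
    if trials[drop] >= best:        # stop when no removal improves prediction
        break
    best, pairs = trials[drop], [q for q in pairs if q != drop]

print("kept blending terms:", pairs, " CV RMSE:", round(best, 3))
```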
Note that there are many ways to evaluate model performance/adequacy, depending on your objective(s) and the metrics you use. Complementary estimation and evaluation metrics, such as the log-likelihood, information criteria (AICc, BIC), explanatory power (R2 and adjusted R2), and predictive power (MSE/RMSE/RASE), offer different perspectives and may highlight different models.
You can then use domain expertise and these metrics to select the model(s) most appropriate/relevant for your topic, and either compare individual predictions from the different models (to see how/where they differ) and/or use a combined model to average out the prediction errors.
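Continuing the sketch above (again only as an illustration, reusing the toy data frame df and hypothetical model terms), this compares two candidate models on several of those metrics and averages their predictions. The AICc parameter count here follows one common convention (coefficients plus the residual variance), and statsmodels reports the uncentered R2 for no-intercept models:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

candidate_models = {
    "full":    ["x1", "x2", "x3", "x1*x2", "x1*x3", "x2*x3"],
    "reduced": ["x1", "x2", "x3", "x1*x2"],
}

def model_matrix(df, terms):
    # Build the model matrix; no intercept column (Scheffe mixture model)
    cols = {}
    for t in terms:
        parts = t.split("*")
        cols[t] = df[parts[0]] * df[parts[1]] if len(parts) == 2 else df[t]
    return pd.DataFrame(cols)

preds = {}
for name, terms in candidate_models.items():
    X = model_matrix(df, terms)
    fit = sm.OLS(df["y"], X).fit()   # sm.OLS adds no constant by default
    n = len(df)
    k = len(terms) + 1               # coefficients + residual variance
    aicc = -2 * fit.llf + 2 * k + 2 * k * (k + 1) / (n - k - 1)
    bic = -2 * fit.llf + k * np.log(n)
    print(f"{name}: logL={fit.llf:.2f}  AICc={aicc:.2f}  BIC={bic:.2f}  "
          f"R2={fit.rsquared:.3f}  adjR2={fit.rsquared_adj:.3f}")
    preds[name] = fit.predict(X)

# Where the models disagree is informative in itself; averaging their
# predictions is a simple way to smooth out individual prediction errors.
avg_pred = np.mean(list(preds.values()), axis=0)
```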
Here are some relevant forum posts dealing with model selection for (mixture) designs:
Backward regression for Mixture DOE analysis with regular (non pro) JMP?
Analysis of a Mixture DOE with stepwise regression
Removing terms from a model following a designed experiment
If you need more detailed advice or guidance, could you share an anonymized version of your DoE?
I hope this answer will help you,
Victor GUILLER
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)