Hi @frankderuyck,
Mixture designs are optimization designs, not screening designs. A mixture design is generally run when the number of factors is low and/or the components are already known to have an effect on the response(s). So power is not a sensible metric (even if it were "reliable") to evaluate or compare mixture designs. Instead, try predictive metrics like the Prediction Variance Profile, Fraction of Design Space Plot, Prediction Variance Surface, and Design Diagnostics: relative G-Efficiency (related to the maximum prediction variance over the experimental space) and the average variance of prediction.
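To make those metrics concrete, here is a minimal numpy sketch (not JMP's internal calculation) of how prediction variance and relative G-efficiency can be computed for a small design. The simplex-centroid design and the choice of candidate points are illustrative assumptions:

```python
import numpy as np

def prediction_variance(X, points):
    """Relative prediction variance v(x) = x' (X'X)^-1 x for each row of `points`."""
    XtX_inv = np.linalg.inv(X.T @ X)
    return np.einsum('ij,jk,ik->i', points, XtX_inv, points)

# A tiny 3-component simplex-centroid design, Scheffe linear model (no intercept)
X = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.5, 0.5],
    [1/3, 1/3, 1/3],
])

# Candidate points over the simplex (here, simply the design points themselves;
# a real evaluation would use a dense grid over the constrained region)
v = prediction_variance(X, X)
N, p = X.shape
g_eff = 100 * p / (N * v.max())   # relative G-efficiency, driven by max prediction variance
print(f"max v(x) = {v.max():.3f}, avg v(x) = {v.mean():.3f}, G-eff = {g_eff:.1f}%")
```

Lower maximum and average prediction variance mean more precise predictions over the mixture region, which is what an optimization design should deliver.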
Moreover, as a mixture design involves factors that are linearly dependent (their sum = 100%, or 1), this constraint creates multicollinearity, which inflates the standard errors of the effect estimates; this is why you see very low power for the different effects in your model.
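To see where that multicollinearity comes from, here is a small numpy sketch (the simplex points are simulated, purely for illustration): once the components sum to 1, an intercept-plus-components model is perfectly linearly dependent.

```python
import numpy as np

rng = np.random.default_rng(0)
# 12 random points on the 3-component simplex: each row sums to 1
comps = rng.dirichlet([1.0, 1.0, 1.0], size=12)

# Model matrix with an intercept column: rank-deficient, because the
# component columns already add up to the intercept column (x1+x2+x3 = 1)
X = np.column_stack([np.ones(12), comps])
print(np.linalg.matrix_rank(X))   # 3, not 4: perfect linear dependence
```

This is why mixture models use special forms (e.g. Scheffe models without an intercept) and why effect-wise power, which assumes well-separated terms, is not informative here.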
See other relevant discussions:
Custom Design: Mixture with Process Variables. How to Evaluate Design?
Should I consider power analysis in DOE?
How to use the effect summary effectively for a mixture DOE?
Hope this answer helps you,
Victor GUILLER
"It is not unusual for a well-designed experiment to analyze itself" (Box, Hunter and Hunter)