What inspired this wish list request?
I routinely perform equivalence tests to qualify new variants of an existing product. I was thrilled to see equivalence testing with multiple comparisons under the Fit Model platform, since there are a few random effects in my model. I only need to see comparisons for Control versus Test Products 1, 2, and 3; I do not need an equivalence test for Test Product 1 vs 2. However, 'all comparisons' appears to be the only option for equivalence testing.

What is the improvement you would like to see?
Either give the option to perform the Equivalence Tests using Dunnett's, or give the option to select only the comparisons relevant to the project. I am sure the former is easier to implement, but the latter is preferable. In this case, I have two controls because we want to account for the variability in the settings under which controls might be manufactured. (Note that I did try the User Defined Comparisons option, but it still performs too many comparisons, or you have to run one pairwise comparison at a time and adjust the error rate yourself.)

Why is this idea important?
Equivalence testing originated in pharma, where you were often testing a brand-name drug (the control) against a generic, so the method naturally lends itself to a test-versus-control paradigm. In other situations, teams may want to see all comparisons to better understand equivalence, say, with respect to laboratories or equipment. Restricting the analysis to only the comparisons required for the study would give the user more statistical power, create fewer issues in controlling the FWER, and cause less confusion when the graphs are used as a communication tool. These results plots are often used to communicate with regulators and interdisciplinary teams. Tables of results and visualizations of differences that aren't relevant to the project create churn and derail team meetings. The ability to curate and customize the output is essential.
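To make the request concrete, here is a minimal Python sketch (not JMP or JSL, and ignoring the random effects in the real model) of the workflow being asked for: TOST equivalence tests for Control versus each Test Product only, with a multiplicity adjustment spread over just those comparisons. The group names, equivalence margin, and data are hypothetical, and a simple Bonferroni correction stands in for a Dunnett-type adjustment.

import numpy as np
from scipy import stats

def tost_welch(x, y, delta):
    """Two one-sided Welch t-tests for |mean(y) - mean(x)| < delta."""
    nx, ny = len(x), len(y)
    mx, my = np.mean(x), np.mean(y)
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    se = np.sqrt(vx / nx + vy / ny)
    # Welch-Satterthwaite degrees of freedom
    df = (vx / nx + vy / ny) ** 2 / (
        (vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1)
    )
    diff = my - mx
    p_lower = stats.t.sf((diff + delta) / se, df)   # H0: diff <= -delta
    p_upper = stats.t.cdf((diff - delta) / se, df)  # H0: diff >= +delta
    return diff, max(p_lower, p_upper)              # TOST p-value

rng = np.random.default_rng(1)
delta = 1.0                                  # hypothetical equivalence margin
control = rng.normal(10.0, 1.0, 30)
tests = {f"Test {i}": rng.normal(10.2, 1.0, 30) for i in (1, 2, 3)}

k = len(tests)                               # only the comparisons of interest
for name, y in tests.items():
    diff, p = tost_welch(control, y, delta)
    # Bonferroni over the 3 control-vs-test comparisons (conservative;
    # a Dunnett-style adjustment would be less so, but is not shown here)
    print(f"Control vs {name}: diff = {diff:+.3f}, TOST p = {p:.4f}, "
          f"equivalent at FWER 0.05: {p * k < 0.05}")

Because the adjustment is spread over only the three comparisons of interest rather than all six pairwise comparisons, each test retains more power, which is the point of the request.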