I have a data set in which two response variables (JO2 and dPsi) were collected under 5 different energy states (dGATP) for subjects on two different diets (PM or SR; the grouping variable). I used the Fit Model platform to plot the individual subject data and to compare whether a linear or a quadratic model would be optimal. The AIC was lower for the linear model for both the individual and the grouped data, so my next question is: are the regression lines for the groups different? I used the equivalence test results under the Linear Fit to see whether the slope or intercept lies outside the decision limits, and both do; but when I run the parameter comparisons, only the y-intercept for SR appears outside the decision limits.
I am further confused because the ANCOVA results (which I thought would be equivalent to Fit Model - Linear) show both a significant interaction (different slopes) and different y-intercepts, indicating that I should reject the null hypothesis that the curves are equal.
1. So is one approach better than the other?
2. Any thoughts on why the two approaches don't agree with the ANCOVA (at least for the parameter comparisons, where both slope estimates fall within the decision limits)?
3. Finally, if I have not exhausted all good will, would my approach to testing non-equivalence be different if a non-linear model was the best fit for the data?
Thank you for any advice.
I will try to answer your second question first. The results do not agree because you have different types of tests going on here.
First, I will limit my comments to Fit Curve with ungrouped data, as that will mimic the analysis you used with Fit Model.
The Equivalence Test under the Fit Curve red triangle menu compares the parameter estimates of one group to those of the other. For your scenario, it compared the intercept and slope of the SR group to the intercept and slope of the PM group. It does this using a ratio of the parameter estimates (so you are "combining" the variances of the two estimates by forming a single statistic). The confidence intervals are formed at the 95% level (the default), but the decision limits correspond to a 25% change. Note that the 25% change specifies how large an effect you are looking for: if the parameters differ by only 10%, the confidence interval would still overlap the shaded area, because the shaded area says they need to differ by at least 25%.
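To make the ratio-based logic concrete, here is a small numpy sketch with made-up numbers (not your data). It fits a slope per group, forms the SR/PM slope ratio, builds an approximate 95% confidence interval with the delta method (I don't know JMP's exact internal formula, so treat this as the idea rather than a reproduction), and checks the interval against the 25% decision limits:

```python
import numpy as np

def slope_and_se(x, y):
    """OLS slope and its standard error for a simple linear fit."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc = x - x.mean()
    b = xc @ y / (xc @ xc)              # slope
    a = y.mean() - b * x.mean()         # intercept
    resid = y - (a + b * x)
    s2 = resid @ resid / (len(x) - 2)   # residual variance
    return b, np.sqrt(s2 / (xc @ xc))

# Made-up data standing in for the two diet groups (illustration only)
x = np.arange(10, dtype=float)
y_pm = 2.0 * x + np.tile([0.3, -0.3], 5)   # fixed "noise" pattern
y_sr = 2.1 * x + np.tile([-0.2, 0.2], 5)

b_pm, se_pm = slope_and_se(x, y_pm)
b_sr, se_sr = slope_and_se(x, y_sr)

ratio = b_sr / b_pm
# Delta-method standard error for a ratio of two independent estimates
se_ratio = abs(ratio) * np.sqrt((se_pm / b_pm) ** 2 + (se_sr / b_sr) ** 2)
ci = (ratio - 1.96 * se_ratio, ratio + 1.96 * se_ratio)

limits = (0.75, 1.25)                   # the "25% change" decision limits
equivalent = limits[0] < ci[0] and ci[1] < limits[1]
print(f"slope ratio = {ratio:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), "
      f"equivalent = {equivalent}")
```

Equivalence is declared only when the whole interval sits inside the limits; either a wide interval or a large true difference defeats it.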
The Parameter Comparison report under Fit Curve performs an ANOM (Analysis of Means) type of test. It compares each parameter estimate to the overall mean of all the parameter estimates, asking whether any group differs from that overall average. For your data, both the SR and PM slope estimates fall within the decision limits, so we cannot claim either estimate is different from the overall mean. It does not matter whether the difference is 10% or 50%; the only question is whether there is a difference at all.
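In pseudo-numbers, the ANOM logic looks something like this (the estimates and standard errors are invented, and JMP's actual ANOM decision limits adjust for the number of groups; the plain 1.96 cutoff here is only a stand-in for the idea):

```python
import numpy as np

# Hypothetical slope estimates and standard errors (illustration only)
slopes = {"PM": 1.98, "SR": 2.11}
ses    = {"PM": 0.05, "SR": 0.04}

grand_mean = np.mean(list(slopes.values()))   # overall average of the estimates

flags = {}
for grp, b in slopes.items():
    # Simplified ANOM-style check: does this estimate fall outside
    # grand_mean +/- 1.96 * SE?  (JMP computes exact ANOM decision
    # limits; this only conveys the comparison being made.)
    flags[grp] = abs(b - grand_mean) > 1.96 * ses[grp]

print(grand_mean, flags)
```

With these invented numbers, both slopes sit inside the limits, mirroring what you describe: each estimate is consistent with the overall average, regardless of how far apart the two groups are from each other.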
In the ANCOVA approach, the significance of the interaction term directly compares the slopes to each other. A significant result (which you have) states that the SR slope is different from the PM slope. It is NOT comparing either slope to the overall average. Again, the magnitude of the difference is not relevant here; even a 10% difference could be flagged as significant.
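For comparison, the ANCOVA interaction test can be sketched with a plain least-squares fit on simulated data where the slopes genuinely differ. The coefficient on the group-by-covariate column is the slope difference itself, and its t statistic is what ANCOVA tests (all numbers below are made up for illustration):

```python
import numpy as np

# Simulated data: two groups whose true slopes differ by 0.5
rng = np.random.default_rng(1)
x = np.tile(np.arange(10, dtype=float), 2)
g = np.repeat([0.0, 1.0], 10)          # 0 = PM, 1 = SR (coding is arbitrary)
y = 1.0 + 2.0 * x + 0.5 * g * x + rng.normal(0, 0.3, size=20)

# ANCOVA design: intercept, covariate, group, group*covariate interaction
X = np.column_stack([np.ones_like(x), x, g, g * x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# t statistic for the interaction term (the slope difference)
resid = y - X @ beta
s2 = resid @ resid / (len(y) - X.shape[1])
cov = s2 * np.linalg.inv(X.T @ X)
t_interaction = beta[3] / np.sqrt(cov[3, 3])
print(f"estimated slope difference = {beta[3]:.3f}, t = {t_interaction:.2f}")
```

Notice that nothing here involves an overall mean or a "how big" threshold: any nonzero slope difference can be declared significant if it is estimated precisely enough.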
This is why your results do not necessarily agree. These reports are subtly different.
Is one approach better than another? No, because they have different interpretations; they are not testing the same thing.
If you are testing a nonlinear model, you won't have the ability to use ANCOVA. The Fit Curve approach should work fine for that scenario. Whether you choose the Equivalence Test or the Compare Parameter Estimates option will depend on what you hope to see and learn.
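As one illustration of comparing nonlinear parameters across groups, here is a sketch using a log-linearizable model, y = a*exp(b*x), fit separately per group (this is just one convenient model for illustration, not necessarily the one you would choose). The same ratio-of-parameters idea from the Equivalence Test carries over to the nonlinear rate constant:

```python
import numpy as np

def exp_fit(x, y):
    """Fit y = a*exp(b*x) by regressing log(y) on x (requires y > 0)."""
    x, ly = np.asarray(x, float), np.log(np.asarray(y, float))
    xc = x - x.mean()
    b = xc @ ly / (xc @ xc)                # rate constant
    a = np.exp(ly.mean() - b * x.mean())   # scale
    return a, b

# Noiseless made-up curves for two groups (illustration only)
x = np.arange(1.0, 9.0)
y_pm = 1.0 * np.exp(0.30 * x)
y_sr = 1.2 * np.exp(0.33 * x)

a_pm, b_pm = exp_fit(x, y_pm)
a_sr, b_sr = exp_fit(x, y_sr)
print(f"rate-constant ratio SR/PM = {b_sr / b_pm:.3f}")
```

You would then put a confidence interval around that ratio and compare it to your decision limits, exactly as the Equivalence Test does for the linear slope, or compare each group's estimates to their overall mean, as the Parameter Comparison does.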
I hope this helps.