<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Equivalence test in Discussions</title>
    <link>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/488201#M73159</link>
    <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/40411"&gt;@CentroidJackal1&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; The approach you propose is certainly one way to evaluate the data. One thing to keep in mind, though: when you say the difference in the results needs to be less than 2%, 2% of which value, method A or method B? That part was not well explained.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; Another way to look at the data is a mean-difference plot, much like a &lt;A href="https://en.wikipedia.org/wiki/Bland%E2%80%93Altman_plot" target="_self"&gt;Bland-Altman&lt;/A&gt; (BA) analysis; you can read up on it at the link. It is a good way to evaluate whether two methods are essentially equivalent, whether there is any bias in the data, and whether that bias comes from one method or the other. The BA analysis does not require normally distributed data.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Hope this helps!&lt;/P&gt;&lt;P&gt;DS&lt;/P&gt;</description>
    <pubDate>Wed, 18 May 2022 17:22:10 GMT</pubDate>
    <dc:creator>SDF1</dc:creator>
    <dc:date>2022-05-18T17:22:10Z</dc:date>
    <item>
      <title>Equivalence test</title>
      <link>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/488085#M73149</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I want to compare two analytical methods that measure the purity of my samples and prove that they are equivalent. I know they don't give exactly the same results, so the mean of their difference is not 0; I only need to prove that the difference between their results is less than 2%. My data come from 28 different samples, and each sample was analyzed once by each method. We tested equivalence with a two-sided t-test on the purity difference, but we are not sure this test is the most appropriate for proving the concept. Please note that the data are not normally distributed. Can someone please advise?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 10 Jun 2023 23:48:30 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/488085#M73149</guid>
      <dc:creator>CentroidJackal1</dc:creator>
      <dc:date>2023-06-10T23:48:30Z</dc:date>
    </item>
    <item>
      <title>Re: Equivalence test</title>
      <link>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/488201#M73159</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/40411"&gt;@CentroidJackal1&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; The approach you propose is certainly one way to evaluate the data. One thing to keep in mind, though: when you say the difference in the results needs to be less than 2%, 2% of which value, method A or method B? That part was not well explained.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; Another way to look at the data is a mean-difference plot, much like a &lt;A href="https://en.wikipedia.org/wiki/Bland%E2%80%93Altman_plot" target="_self"&gt;Bland-Altman&lt;/A&gt; (BA) analysis; you can read up on it at the link. It is a good way to evaluate whether two methods are essentially equivalent, whether there is any bias in the data, and whether that bias comes from one method or the other. The BA analysis does not require normally distributed data.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Hope this helps!&lt;/P&gt;&lt;P&gt;DS&lt;/P&gt;</description>
      <pubDate>Wed, 18 May 2022 17:22:10 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/488201#M73159</guid>
      <dc:creator>SDF1</dc:creator>
      <dc:date>2022-05-18T17:22:10Z</dc:date>
    </item>
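The Bland-Altman analysis suggested above boils down to plotting per-sample differences against per-sample averages and summarizing them with a bias and limits of agreement. The arithmetic can be sketched outside JMP like this (numpy is my choice of tool and the purity values are made up for illustration, not from the thread):

```python
import numpy as np

# Hypothetical paired purity measurements (%), one value per sample per method.
a = np.array([98.1, 97.5, 99.0, 96.8, 98.4, 97.9, 98.8, 97.2])
b = np.array([98.6, 97.9, 99.2, 97.5, 98.9, 98.1, 99.1, 97.8])

diff = a - b                  # per-sample difference (y-axis of the BA plot)
avg = (a + b) / 2             # per-sample average (x-axis of the BA plot)
bias = diff.mean()            # systematic offset between the two methods
sd = diff.std(ddof=1)         # spread of the differences
loa_low = bias - 1.96 * sd    # lower 95% limit of agreement
loa_high = bias + 1.96 * sd   # upper 95% limit of agreement

print(f"bias={bias:.3f}, limits of agreement=({loa_low:.3f}, {loa_high:.3f})")
```

Points falling outside the limits of agreement are the individual samples whose method-to-method difference is unusually large, which is exactly the per-measurement view the thread goes on to discuss.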
    <item>
      <title>Re: Equivalence test</title>
      <link>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/488202#M73160</link>
      <description>&lt;P&gt;There are numerous long-standing standards (e.g., NCCLS, IUPAC, ICH) about method comparison. It can involve a number of aspects of the measurement: accuracy, precision, linearity, limit of detection or quantitation, and so on. You can use the Fit Orthogonal command in the Bivariate platform to perform a Deming regression, which minimizes the error in both X and Y. JMP 17 will also include Passing-Bablok regression and Bland-Altman analysis. In the meantime, see the JMP Help chapter about &lt;A href="https://www.jmp.com/support/help/en/16.2/#page/jmp/measurement-systems-analysis.shtml#" target="_self"&gt;Measurement System Analysis&lt;/A&gt; for more ideas.&lt;/P&gt;</description>
      <pubDate>Wed, 18 May 2022 17:31:30 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/488202#M73160</guid>
      <dc:creator>Mark_Bailey</dc:creator>
      <dc:date>2022-05-18T17:31:30Z</dc:date>
    </item>
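The Deming regression that Fit Orthogonal performs has a closed-form slope once the ratio of the two error variances is fixed. A minimal sketch of that formula (the `deming` helper and the paired readings are hypothetical illustrations, not JMP's implementation):

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Deming regression: measurement error in both x and y.

    delta is the assumed ratio var(error_y) / var(error_x); delta=1
    reduces to orthogonal regression.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y)[0, 1]
    slope = ((syy - delta * sxx
              + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2))
             / (2 * sxy))
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Hypothetical paired purity readings: method B reads a constant 0.4 high.
x = [96.5, 97.2, 98.0, 98.7, 99.1, 97.6]
y = [96.9, 97.6, 98.4, 99.1, 99.5, 98.0]
slope, intercept = deming(x, y)
print(f"slope={slope:.4f}, intercept={intercept:.4f}")
```

With a pure constant offset, the fit recovers slope 1 and intercept 0.4: a proportional bias would show up as a slope different from 1 instead.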
    <item>
      <title>Re: Equivalence test</title>
      <link>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/488327#M73181</link>
      <description>&lt;P&gt;&amp;nbsp;Hi&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/12549"&gt;@SDF1&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you so much for the quick reply, and apologies for the confusion. The purity is expressed as a percentage, which is why I wrote 2%, but the difference I mean is the absolute value: I want to show, with a stated confidence, that the difference will "always" (for 90%, 95%, or 99% of the population) be less than 2. I am not sure the BA analysis will prove that point, as it seems it will only show that there is a bias between the two methods.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Many thanks,&lt;/P&gt;&lt;P&gt;Jennifer.&lt;/P&gt;</description>
      <pubDate>Thu, 19 May 2022 08:51:48 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/488327#M73181</guid>
      <dc:creator>CentroidJackal1</dc:creator>
      <dc:date>2022-05-19T08:51:48Z</dc:date>
    </item>
    <item>
      <title>Re: Equivalence test</title>
      <link>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/488328#M73182</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/5358"&gt;@Mark_Bailey&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for your reply. Fit Orthogonal says the methods are not equivalent, just as Fit Line does. We know the methods don't give exactly the same results and that there is a bias, but we want to show this bias is not significant. Does that make more sense?&lt;/P&gt;&lt;P&gt;Many thanks,&lt;/P&gt;&lt;P&gt;Jennifer.&lt;/P&gt;</description>
      <pubDate>Thu, 19 May 2022 09:08:28 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/488328#M73182</guid>
      <dc:creator>CentroidJackal1</dc:creator>
      <dc:date>2022-05-19T09:08:28Z</dc:date>
    </item>
    <item>
      <title>Re: Equivalence test</title>
      <link>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/488466#M73198</link>
      <description>&lt;P&gt;If the methods are not equivalent, then the bias is significant.&lt;/P&gt;</description>
      <pubDate>Thu, 19 May 2022 13:02:59 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/488466#M73198</guid>
      <dc:creator>Mark_Bailey</dc:creator>
      <dc:date>2022-05-19T13:02:59Z</dc:date>
    </item>
    <item>
      <title>Re: Equivalence test</title>
      <link>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/488467#M73199</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/40411"&gt;@CentroidJackal1&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; I believe your equivalence test shows that you cannot reject the equivalence of the two methods to within +/-2 units of 0. In the example you provide, the average difference is -0.72, and since you accept anything within +/-2 units of 0, that average difference suggests the methods are equivalent. The red whiskers on the average difference are the confidence bounds on that mean difference.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; Something to keep in mind is that this equivalence test looks at the mean difference across all 28 of your measurements; it says nothing about any single measurement. So I think one should be careful about saying the difference will "always" be less than 2 (at whatever confidence you decided ahead of time). What should really be claimed is that the&amp;nbsp;&lt;EM&gt;average&lt;/EM&gt; difference is within +/-2 units of 0. This is illustrated by your distribution, where at least one data point has a difference of -4. That measurement differs from 0 by more than 2 units and would fail your original description, yet the average of all measurements does not.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; The BA analysis should give you this information as well: you can estimate the confidence interval on the mean-difference plot, see how many measurements fall outside those lines, and determine whether there is any bias in one method or the other.&amp;nbsp;With the BA analysis, you can get a good estimate of how many data points will fall outside your CIs, and hence a more reasonable answer to what you actually want to know: for any given measurement, you have confidence level X that the difference is within +/-2 units.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; You might also consider an ANOVA, where you can again do an equivalence test on the means, but in this case using the actual measurement values; even though the absolute values of A and B might not be the same, you can still test equivalence to within +/-2 units. The nice thing here is that you can account for unequal variances if need be, for example if methods A and B don't have similar standard deviations.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp; With either equivalence test, you are really looking at the difference in means or the mean difference across all your measurements, and that should be made clear. A single measurement can always be an outlier, but if the mean response of each method is equivalent, it should pass the equivalence test.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Good luck!,&lt;/P&gt;&lt;P&gt;DS&lt;/P&gt;</description>
      <pubDate>Thu, 19 May 2022 13:08:12 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/488467#M73199</guid>
      <dc:creator>SDF1</dc:creator>
      <dc:date>2022-05-19T13:08:12Z</dc:date>
    </item>
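The equivalence test on the mean difference discussed in this thread is a paired TOST (two one-sided tests). A sketch under the thread's +/-2 margin, with hypothetical per-sample differences chosen to average -0.72 like the example quoted above (scipy is my choice of tool; the thread itself uses JMP's Matched Pairs platform):

```python
import numpy as np
from scipy import stats

# Hypothetical per-sample differences (method A - method B), n = 10.
d = np.array([-0.9, -0.5, -1.1, -0.3, -0.8, -0.6, -1.0, -0.4, -0.7, -0.9])
low, high = -2.0, 2.0            # equivalence margin: within +/-2 units of 0

n = len(d)
mean = d.mean()
se = d.std(ddof=1) / np.sqrt(n)

# Two one-sided t-tests on the mean difference.
t_low = (mean - low) / se                   # H0: mean <= low
t_high = (mean - high) / se                 # H0: mean >= high
p_low = 1 - stats.t.cdf(t_low, df=n - 1)
p_high = stats.t.cdf(t_high, df=n - 1)
p_tost = max(p_low, p_high)                 # equivalent if p_tost < alpha

print(f"mean diff={mean:.2f}, TOST p={p_tost:.4g}")
```

As the reply stresses, this is a statement about the mean difference only; an individual sample (like the -4 point) can still fall outside the margin even when the TOST passes.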
    <item>
      <title>Re: Equivalence test</title>
      <link>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/488470#M73200</link>
      <description>&lt;P&gt;I should have replied that your equivalence test demonstrates sufficient evidence to reject the null hypothesis that the methods are different by more than 2% at the 95% confidence level.&lt;/P&gt;</description>
      <pubDate>Thu, 19 May 2022 13:17:50 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/488470#M73200</guid>
      <dc:creator>Mark_Bailey</dc:creator>
      <dc:date>2022-05-19T13:17:50Z</dc:date>
    </item>
    <item>
      <title>Re: Equivalence test</title>
      <link>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/489811#M73271</link>
      <description>&lt;P&gt;Thank you very much&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/12549"&gt;@SDF1&lt;/a&gt;. That was really helpful.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Jennifer.&lt;/P&gt;</description>
      <pubDate>Mon, 23 May 2022 06:50:55 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/489811#M73271</guid>
      <dc:creator>CentroidJackal1</dc:creator>
      <dc:date>2022-05-23T06:50:55Z</dc:date>
    </item>
    <item>
      <title>Re: Equivalence test</title>
      <link>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/489812#M73272</link>
      <description>&lt;P&gt;Thank you for clarifying&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/5358"&gt;@Mark_Bailey&lt;/a&gt;. and thank you very much for your help.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Jennifer.&lt;/P&gt;</description>
      <pubDate>Mon, 23 May 2022 06:52:08 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/489812#M73272</guid>
      <dc:creator>CentroidJackal1</dc:creator>
      <dc:date>2022-05-23T06:52:08Z</dc:date>
    </item>
    <item>
      <title>Re: Equivalence test</title>
      <link>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/894325#M105506</link>
      <description>&lt;P&gt;I’m comparing two analytical methods that assess the purity of my samples and would like to determine whether they are &lt;EM&gt;equivalent&lt;/EM&gt;. I’m aware that they don’t yield identical results (so the mean difference between the two methods isn’t zero), but what matters is whether the difference remains within a 2% tolerance.&lt;/P&gt;
&lt;P&gt;Here’s the setup:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;
&lt;P&gt;I tested &lt;STRONG&gt;28 different samples&lt;/STRONG&gt;, with each sample analyzed once by each method.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;I’m interested in verifying whether the methods are equivalent within a ±2% margin.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;I originally applied a &lt;STRONG&gt;two-sided t-test&lt;/STRONG&gt; on the differences in purity, but I’m not confident that this approach is best suited for establishing equivalence.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;The kicker: &lt;STRONG&gt;the data are not normally distributed&lt;/STRONG&gt;.&lt;/P&gt;
&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;I’d greatly appreciate any guidance on:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;
&lt;P&gt;Which statistical tests are most appropriate for assessing equivalence when data aren’t normally distributed.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Valid ways to implement such tests under these conditions.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;Pitfalls I should be aware of when assessing equivalence with non-normal data.&lt;/P&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;Any recommendations or advice would be really helpful. Thank you!&lt;/P&gt;</description>
      <pubDate>Wed, 13 Aug 2025 23:35:10 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/894325#M105506</guid>
      <dc:creator>AcceptanceFox47</dc:creator>
      <dc:date>2025-08-13T23:35:10Z</dc:date>
    </item>
    <item>
      <title>Re: Equivalence test</title>
      <link>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/894356#M105510</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.jmp.com/t5/user/viewprofilepage/user-id/74293"&gt;@AcceptanceFox47&lt;/a&gt;&amp;nbsp;: If the data are approximately log-normal (i.e., the natural log of the data is approximately normal), then this can be treated as a confidence interval for paired data (via the Matched Pairs platform, for example), with the analysis carried out on the natural log of the data. The endpoints of the 90% confidence interval on the difference (on the log scale) are then anti-logged (EXP(.)) to get a confidence interval on the geometric mean ratio. This final interval would need to lie within 0.98 to 1.02 to claim equivalence.&lt;/P&gt;</description>
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;And if the sample size is large enough (n=28 qualifies), then the &lt;A href="https://en.wikipedia.org/wiki/Central_limit_theorem" target="_blank" rel="noopener"&gt;Central Limit Theorem&lt;/A&gt; says not to worry about normality. Edit: so, carry on as I explained above.&lt;/P&gt;</description>
      <pubDate>Thu, 14 Aug 2025 08:58:19 GMT</pubDate>
      <guid>https://community.jmp.com/t5/Discussions/Equivalence-test/m-p/894356#M105510</guid>
      <dc:creator>MRB3855</dc:creator>
      <dc:date>2025-08-14T08:58:19Z</dc:date>
    </item>
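The log-scale recipe in the final reply, a paired CI on log differences, anti-logged to a geometric mean ratio and checked against 0.98 to 1.02, can be sketched as follows (the purity values and the use of scipy are illustrative assumptions, not from the thread):

```python
import numpy as np
from scipy import stats

# Hypothetical paired purity results for methods A and B
# (the thread has n = 28; a smaller made-up set is used here).
a = np.array([98.2, 97.6, 99.1, 96.9, 98.5, 97.8, 98.9, 97.3])
b = np.array([98.4, 97.9, 99.2, 97.4, 98.8, 98.0, 99.1, 97.7])

logdiff = np.log(a) - np.log(b)        # paired analysis on the log scale
n = len(logdiff)
mean = logdiff.mean()
se = logdiff.std(ddof=1) / np.sqrt(n)
t = stats.t.ppf(0.95, df=n - 1)        # 90% two-sided CI -> 95th percentile

# Anti-log the CI endpoints to get a CI on the geometric mean ratio A/B.
ci = np.exp([mean - t * se, mean + t * se])
equivalent = 0.98 < ci[0] and ci[1] < 1.02
print(f"90% CI on ratio A/B: ({ci[0]:.4f}, {ci[1]:.4f}), equivalent={equivalent}")
```

The 90% two-sided interval paired with a check at both margins is the standard TOST-at-alpha-0.05 construction, which is why 0.95 rather than 0.975 appears in the t quantile.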
  </channel>
</rss>

