TOST Acceptance criteria and Sample Size

JMP User Community, Discussions



Apr 19, 2017 1:21 AM

Hello,

I'm analyzing historical data and need to find meaningful equivalence acceptance criteria between groups, and to calculate the sample size for a new experiment. How can I set practically equivalent acceptance criteria? I'm using the DOE Sample Size and Power calculator (k Sample Means); which prospective means should I enter?

Thank You

Accepted Solution


Apr 19, 2017 5:49 AM


Of course you can set more stringent criteria for this test. This aspect of the TOST is not a statistical matter, though, unless you mean by "more stringent" that you require greater significance (lower alpha level) in the test.

The distribution of the historical data refers to individual outcomes. The TOST is a test of the mean of the population. The historical data could be used to estimate the mean and the standard deviation.

Use **DOE** > **Design Diagnostics** > **Sample Size and Power** > **One Sample Mean**. It turns out that the sample size is the same as for the TOST. The difference to detect is the difference between the assumed mean and the nearer criterion limit.

You have specifications, so you could use them for your new criteria. You could determine a reasonable margin of safety. Using my example above, if I want a 50% margin, then I would set my criteria as y > 9.95 and y < 10.05. You don't need the distribution of the historical data to set the criteria.
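The margin arithmetic and the approximate sample-size calculation can be sketched in Python rather than JSL. This uses the simple normal approximation, not the exact noncentral-t computation JMP performs, so treat the result as a ballpark; the function names and numbers are mine, chosen to match the 10 ± 0.1 example above:

```python
import math
from statistics import NormalDist

def equivalence_margins(target, spec_half_width, safety=0.5):
    # Shrink the specification window by a margin of safety,
    # e.g. 50% of +/-0.1 around 10 gives (9.95, 10.05).
    half = spec_half_width * safety
    return target - half, target + half

def tost_sample_size(sigma, delta, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a one-sided test of the mean:
    n = ((z_{1-alpha} + z_{power}) * sigma / delta)^2, where delta is the
    distance from the assumed true mean to the nearer equivalence limit."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha)   # one-sided significance level
    z_b = z.inv_cdf(power)       # desired power
    return math.ceil(((z_a + z_b) * sigma / delta) ** 2)

lo, hi = equivalence_margins(10.0, 0.1)       # approx (9.95, 10.05)
n = tost_sample_size(sigma=0.05, delta=0.05)  # -> 7
```

Halving the detectable difference roughly quadruples the required sample size, which is why tightening the criteria toward the assumed mean gets expensive quickly.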

Learn it once, use it forever!

5 REPLIES


Apr 19, 2017 5:08 AM

The practically equivalent acceptance criteria are not statistical confidence bounds or limits. Like specifications, they are based on unacceptable performance or failure criteria. You might, for example, determine that a particular attribute of the material, part, or device must be within 0.1 of 10 or else it does not perform as claimed. The practically equivalent acceptance criteria are y > 9.9 and y < 10.1 in this case. So the answer to your question is that they come from specifications.

The equivalence test that you mention (TOST) is a pair of one-sided hypothesis tests where the acceptance criteria define the null hypothesis:

H0: mean ≤ 9.9 or mean ≥ 10.1

Learn it once, use it forever!


Apr 19, 2017 5:25 AM

Thank you.

I have historical values and all are within specification, but I would like to set equivalence acceptance criteria that are more stringent than the specifications, to demonstrate, for example, that two lots are practically equivalent. Can I set more stringent equivalence acceptance criteria using a rule derived from the distribution of the historical data?

I also need to determine the sample size for the test from the Sample Size calculator. (Could I use the Sample Size calculator for a t-test, and which prospective means do I enter in the calculator?)



Apr 19, 2017 6:00 AM

OK, many thanks.

**DOE** > **Design Diagnostics** > **Sample Size and Power** > **One Sample Mean** also asks me for Std Dev. Is this from the historical data?


Apr 19, 2017 10:29 AM