
JMP User Community > Discussions > Estimating the Probability of finding defective product



Created: Apr 24, 2018 1:11 PM | Last Modified: Apr 24, 2018 1:14 PM (4670 views)

I need a little bit of statistics help. I've been working on a project that has yielded inconclusive results based on my sampling. My leadership team wants to press forward nonetheless and is planning to sample from the line at a set frequency to "Verify" what I could not on my own. I need help estimating the probability that they will find what I couldn't find based on their increased frequency.

Here are some specifics.

What they are looking for:

- Falsely accepted product (underweight food)
- Out-of-spec range: greater than 0 and less than 36.4 grams
- Sampling frequency: 12 bars every 30 minutes
- Production rate: 10,500 bars every 30 minutes

Based on a stable normal distribution, I estimate that about 1.03751% of production, or 109 of the 10,500 bars in that time period, will fall in the out-of-spec range.

The hope is that, by sampling at the frequency production has chosen, finding no out-of-spec bars will show that the improvement I've been working on has not increased the risk of falsely accepted product through the measurement system. My feeling is that even if this study finds no defective bars, that may be due to the sample size alone rather than an absence of defective product.

I have to imagine there is a statistic out there that can help me predict how many samples I would need to collect to find a defective product.
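There is: with the 109-in-10,500 estimate above, the chance of catching at least one under-spec bar in a 12-bar sample follows directly from the hypergeometric distribution, and the sample size needed for a given detection probability follows from the binomial approximation. A sketch (using only the numbers stated in the question):

```python
from math import comb, log, ceil

N, K, n = 10_500, 109, 12  # lot size, estimated defectives, sample size

# Hypergeometric: P(no defectives among n bars drawn without replacement)
p_none = comb(N - K, n) / comb(N, n)
p_detect = 1 - p_none
print(f"P(at least one defective per 12-bar sample) = {p_detect:.3f}")

# Sample size needed to see at least one defective with 95% probability,
# from (1 - p)^n <= 0.05 with p = K/N
p = K / N
n_95 = ceil(log(0.05) / log(1 - p))
print(f"Samples needed for 95% detection probability = {n_95}")
```

Under these assumptions each 12-bar sample has only about a 12% chance of catching a defective bar, and roughly 288 bars would be needed for 95% confidence of catching at least one, which supports the concern that a clean result could be a sample-size artifact.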

1 ACCEPTED SOLUTION


Your question is the most frequently asked question of a statistician: what n should I use?

The most frequent reply is "it depends."

There are numerous web sites with zero failure acceptance plans that require AQL, acceptable quality level, and other details to be defined before using a formula or a table for N. A very simple one is http://asq.org/quality-progress/2007/11/basic-quality/zero-defect-sampling.html.
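The zero-failure (c = 0) plans in that link reduce to a single relationship: the lot is accepted only if the sample contains no defectives, so the required n for a given confidence and defect fraction p comes from (1 - p)^n ≤ 1 - confidence. A sketch (the defect rates below are illustrative, not taken from the article):

```python
from math import ceil, log

def c0_sample_size(p: float, confidence: float = 0.95) -> int:
    """Smallest n giving at least `confidence` probability of seeing
    one or more defectives when the true defect fraction is p
    (zero-acceptance, c = 0 plan)."""
    return ceil(log(1 - confidence) / log(1 - p))

for p in (0.01, 0.05, 0.10):
    print(f"p = {p:.2f}: n = {c0_sample_size(p)}")
```

Note how quickly n grows as the defect fraction shrinks; at a roughly 1% defect rate, around 300 samples are needed for 95% confidence.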

However, I recommend you find your company statistician or work with a local university statistics professor. There are many details that would need to be discussed before providing an answer (it depends). For example, how is the sample collected: at the end of the line? From multiple production lines? If you collect 12 samples every 30 minutes, does that mean all 12 were collected at one time? Will your plan capture the multiple source tools? Does the bar weight depend on only one tool set or a series of tools? [Since you said bars, I was thinking of coated granola bars, yumm! For a scenario like that, the weight problem could be due to the bar "press" or the coater.]

Do you have control charts/monitors for each of the tool processes? If yes, what is their stability? When you assume a random normal distribution, have you looked at the data by time and tool and is that a reasonable assumption? Just like cancer studies on nude mice, sometimes testing/sampling from the most likely to fail can provide information, and other times it can miss valuable information. If you have many tools, you could use simulation to "what if" a bar was created from the series of tools on the low, low, low end.

When you have seen failures before, were they truly random or clustered in time?

Many sampling plans, especially if something has changed and there is no true baseline, use a double sampling method. For example, if your samples are far away from the spec limits (in both mean and sample standard deviation), sample as usual; but if a sample is "near" the spec limit, increase sampling and testing.
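A toy sketch of such a double-sampling trigger (the guard-band multiplier is an illustrative assumption, not a standard; 36.4 g is the lower spec limit from the question):

```python
def sampling_mode(sample_mean: float, sample_sd: float,
                  lsl: float = 36.4, guard_band: float = 2.0) -> str:
    """Escalate sampling when the sample mean, less a guard band of
    `guard_band` standard deviations, approaches the lower spec limit.
    Thresholds here are illustrative, not from any published plan."""
    if sample_mean - guard_band * sample_sd > lsl:
        return "normal sampling"
    return "increase sampling and testing"

print(sampling_mode(40.0, 1.0))  # comfortably above spec
print(sampling_mode(38.0, 1.0))  # near the limit: escalate
```

Any real escalation rule would be tuned to the process capability and the cost of extra testing, which is exactly the "it depends" above.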

I wish I could be more specific, but after many years of statistics and supporting a manufacturing line, N alone is never the complete answer. And if you review my questions, you'll notice they ask for details about what is known: the factors that influence any sampling plan.

I have no one-size-fits-all formula, but maybe some of these questions, along with the simple ASQ rule of thumb in the link, give you some ideas and avenues to pursue.

Good Luck!

2 REPLIES



Re: Estimating the Probability of finding defective product

Hi, mmeewes!

I totally agree with Georgia: you should engage a statistician.

In my experience, Management understands costs better than probabilities.

As such, check this article out: Level of Quality for Minimum Cost of Manufacture of a Specification

It is about loaves of bread instead of bars, but you get the idea...

Good luck!
