Let me start by acknowledging I have a deterministic bias (vs. probabilistic). Here are my thoughts.
If I am trying to come up with an internal spec range of particle size for my product that I can be confident my process would deliver in the future, should I use the control limits range (UCL and LCL) or the tolerance interval?
I don’t know your situation well enough to provide specific advice, but I have worked on many projects involving particle size. I’ll start with your last paragraph. Let’s say you have established a causal relationship between particle size and some performance measure of the product (perhaps you ran experiments), and let’s say particle size affects viscosity. The question is: how much does viscosity change as particle size changes? When does particle size have an appreciable effect on viscosity?
Philosophically, for factors that have a significant effect on product performance, the optimum target value should be identified and then you should continually reduce variation around that target value. Of course, Taguchi would argue there is an economic balance point: the cost of further reducing variation should not exceed the cost associated with the reduced performance of the product in the hands of the customer.
See: Taguchi’s Loss Function
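As a rough illustration of the idea (the target and cost constant below are invented numbers, not from Taguchi or the original question), the quadratic loss function can be sketched as:

```python
# Taguchi's quadratic loss: L(y) = k * (y - T)^2, where T is the target
# value and k converts squared deviation into a monetary loss.
def taguchi_loss(y, target, k):
    """Expected loss for a unit whose measured value is y."""
    return k * (y - target) ** 2

# Hypothetical example: target particle size 50 um, with k chosen so a
# 10 um deviation costs $4 (k = 4 / 10**2 = 0.04).
loss_at_55 = taguchi_loss(55, 50, 0.04)  # loss grows with squared deviation
```

The point of the quadratic form is that loss is incurred for any deviation from target, not just for values outside a tolerance; a unit just inside a spec limit is nearly as costly as one just outside it.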
Control charts may provide insight into the consistency/stability of your process (see below). Whether the control charts provide sufficient confidence is a function of how representative your sampling is of future conditions.
Tolerances have nothing to do with consistency. They are established to make decisions about how much variation in the factor (in this case particle size) is acceptable before the product performance is sacrificed.
I understand that a tolerance interval indicates a range within which a specified percentage of a population (future value) is expected to fall.
According to NIST, here is the definition of a tolerance interval:
“A confidence interval covers a population parameter with a stated confidence, that is, a certain proportion of the time. There is also a way to cover a fixed proportion of the population with a stated confidence. Such an interval is called a tolerance interval. The endpoints of a tolerance interval are called tolerance limits. An application of tolerance intervals to manufacturing involves comparing specification limits prescribed by the client with tolerance limits that cover a specified proportion of the population.”
(https://www.itl.nist.gov/div898/handbook/prc/section2/prc263.htm)
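To make the NIST definition concrete, here is a sketch of a two-sided normal tolerance interval using Howe's approximation for the k factor. This assumes approximately normal data, and the particle-size values in any usage are invented for illustration:

```python
# Two-sided normal tolerance interval via Howe's k-factor approximation:
# k = sqrt( nu * (1 + 1/n) * z^2 / chi2_{alpha, nu} ), nu = n - 1.
import math
from scipy.stats import norm, chi2

def tolerance_interval(data, coverage=0.95, confidence=0.95):
    """Interval expected to cover `coverage` of the population
    with `confidence` confidence, assuming normality."""
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    nu = n - 1
    z = norm.ppf((1 + coverage) / 2)          # normal quantile for coverage
    chi2_q = chi2.ppf(1 - confidence, nu)     # lower chi-square quantile
    k = math.sqrt(nu * (1 + 1 / n) * z ** 2 / chi2_q)
    return mean - k * s, mean + k * s
```

For n = 10 at 95% coverage / 95% confidence, k works out to about 3.38, noticeably wider than a naive mean ± 2s interval; that extra width is the price of covering a stated proportion of the population with stated confidence rather than just estimating a parameter.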
This is an enumerative technique. Deming (1975) pointed out that the use of data requires knowledge about different sources of uncertainty, and that it also requires understanding the distinction between enumerative and analytic problems. “Analysis of variance, t-test, confidence intervals, and other statistical techniques taught in the books, however interesting, are inappropriate because they provide no basis for prediction and because they bury the information contained in the order of production. Most if not all computer packages for analysis of data, as they are called, provide flagrant examples of inefficiency.”
See: Deming, W. Edwards (1975), “On Probability As a Basis For Action”, The American Statistician, 29(4), pp. 146-152
Control limits, on the other hand, represent the acceptable range of variation within a process and are used to monitor if a process is operating within its normal variability. They help detect potential issues if data points fall outside these limits.
This is a misunderstanding of the control chart method. Control limits have nothing to do with acceptability, nor is their use solely about monitoring. Points outside the control limits may not be “potential issues”.

There are two charts used in the control chart method: a range (or moving range, MR) chart and an average chart. The range chart answers the question: is the within-subgroup variation (which is a function of the sources, or components, of variation that vary at that frequency) stable, consistent, and therefore predictable? If the range chart indicates instability (what Deming called special-cause and Shewhart called assignable-cause variation), then you should seek to understand why, with strong clues about when it happened in time.

The average chart (a.k.a. X-bar chart) is a comparison chart. It compares the within-subgroup variation (expressed as control limits) to the between-subgroup components of variation (the plotted averages) and answers the question: which components of variation have the greatest leverage? An in-control average chart indicates the within-subgroup sources dominate; an out-of-control chart indicates the between-subgroup sources dominate. Neither is good or bad.
See:
Shewhart, Walter A. (1931), “Economic Control of Quality of Manufactured Product”, D. Van Nostrand Co., New York
Wheeler, Donald J., and Chambers, David S. (1992), “Understanding Statistical Process Control”, SPC Press (ISBN 0-945320-13-2)
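To make the two-chart distinction concrete, here is a minimal sketch of computing X-bar and R chart limits. The subgroup data are illustrative, and the constants are the standard Shewhart values for subgroups of size 5:

```python
# Standard Shewhart constants for subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

# Illustrative subgroups of 5 particle-size measurements each
# (invented numbers, not from the original discussion).
subgroups = [
    [50.1, 49.8, 50.3, 50.0, 49.9],
    [50.2, 50.0, 49.7, 50.1, 50.3],
    [49.9, 50.4, 50.0, 49.8, 50.2],
]

xbars = [sum(s) / len(s) for s in subgroups]      # subgroup averages
ranges = [max(s) - min(s) for s in subgroups]     # subgroup ranges
xbarbar = sum(xbars) / len(xbars)                 # grand average
rbar = sum(ranges) / len(ranges)                  # average range

# Range chart: assesses whether within-subgroup variation is stable.
ucl_r, lcl_r = D4 * rbar, D3 * rbar
# Average chart: limits built from WITHIN-subgroup variation, so the
# plotted averages compare between-subgroup variation against it.
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
```

Note that the average-chart limits are derived from the within-subgroup ranges, not from the spread of the averages themselves; that is exactly why the X-bar chart works as a comparison of between- versus within-subgroup variation.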
What I’m unclear about is whether control limits serve as a predictor for future values. If my process is stable, should all future values fall within the UCL and LCL? Then how is the tolerance interval different from control limits?
Only range charts provide insight into stability. And yes, if those range charts are in-control over time, you have evidence to suggest the process is stable. Keep in mind you are only assessing the stability of the within-subgroup components. Average charts are comparison charts; you can’t assess stability by comparing different components of variation to each other. To assess stability, you must compare the same components to themselves over time.
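A trivial sketch of that stability check over time: given a sequence of subgroup ranges and the range-chart limits, flag which subgroups show evidence of instability (hypothetical helper and made-up data):

```python
# Flag subgroup ranges falling outside the range-chart limits.
# Each flagged index is a clue about WHEN instability occurred.
def unstable_points(ranges, ucl_r, lcl_r=0.0):
    """Return indices of subgroup ranges outside [lcl_r, ucl_r]."""
    return [i for i, r in enumerate(ranges) if r > ucl_r or r < lcl_r]

# Illustrative example: the third subgroup's range exceeds the limit.
flagged = unstable_points([0.5, 0.6, 1.8, 0.4], ucl_r=1.5)
```

The flagged indices locate instability in time, which is the strong clue for investigating what changed in the process at that point.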
Tolerance intervals are dependent on the fixed limits you impose.
“All models are wrong, but some are useful.” (G.E.P. Box)