I have a table of continuous process data. I want to calculate the Cpk at every measurement, so that I can follow the variation in Cpk over time. Is there an elegant way to do this?
Before someone tries to script this solution, some thoughts about Cpk sample size issues:
Cpk confidence intervals for small samples are huge until you get to 20, 200, or even 2000 measurements. (A nominal Cpk of 2.0 might be <1.0 or >3.0, for example, with few samples.) Just because computers report results to 2 or more decimal places does not mean those digits mean anything.
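To see how wide those intervals really are, here is a minimal sketch using the common normal-approximation standard error for an estimated Cpk (the Bissell-style approximation); the function name `cpk_ci` is my own, not from any particular software:

```python
import math

def cpk_ci(cpk, n, z=1.96):
    """Approximate 95% confidence interval for an estimated Cpk,
    using the common normal-approximation standard error:
    se ~= sqrt(1/(9n) + Cpk^2 / (2(n-1)))."""
    se = math.sqrt(1.0 / (9 * n) + cpk**2 / (2 * (n - 1)))
    return cpk - z * se, cpk + z * se

# How wide is the interval around a nominal Cpk of 2.0?
for n in (20, 200, 2000):
    lo, hi = cpk_ci(2.0, n)
    print(f"n={n:5d}: Cpk 95% CI = ({lo:.2f}, {hi:.2f})")
```

With n=20 the interval spans roughly 1.3 to 2.7, shrinking toward 1.9 to 2.1 only near n=2000, which is why a per-measurement Cpk is mostly noise.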
You might have to use a startup set of 20 (or 200) typical measurements from history, then recalculate as each new measurement is added.
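That startup-set-then-recalculate idea can be sketched in a few lines of Python; the function names and the `min_n` threshold are illustrative assumptions, not a standard:

```python
import statistics

def cpk(data, lsl, usl):
    """Cpk from a sample: distance of the mean to the nearer spec
    limit, in units of 3 sample standard deviations."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)  # (n-1) sample stdev
    return min(usl - mu, mu - lsl) / (3 * sigma)

def expanding_cpk(history, new_points, lsl, usl, min_n=20):
    """Seed with a startup set from history, then re-estimate Cpk
    as each new measurement arrives (once min_n points exist)."""
    data = list(history)
    out = []
    for x in new_points:
        data.append(x)
        if len(data) >= min_n:
            out.append((len(data), cpk(data, lsl, usl)))
    return out
```

Each entry pairs the running sample size with the re-estimated Cpk, so you can plot the estimate alongside its growing sample size.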
However, Cpk changes very slowly, which is why many companies only report it quarterly, or if high-speed mfg, monthly or weekly, but with stated sample size and upper and lower confidence levels. Just my opinion based on volume mfg practices.
Better perhaps to use your measurements in an EWMA or other SPC chart with OOC rules and limits, and then show your moving Cpk (based on a moving mean and moving stdev!) on a second chart below the SPC individuals chart? JQT had so many discussions of Cpk confidence-interval issues that Lloyd Nelson banned articles on Cpk for a full year...long ago, as I remember.
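For anyone wanting to script the EWMA option, here is a minimal sketch of the textbook EWMA statistic with time-varying control limits; `lam` (the smoothing weight) and `L` (the limit width) are the usual tuning parameters, and the function name is my own:

```python
import math

def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
    """EWMA statistic z_i = lam*x_i + (1-lam)*z_{i-1}, with the
    standard time-varying control limits
    mu0 +/- L*sigma*sqrt(lam/(2-lam) * (1 - (1-lam)^(2i))).
    Returns (z, lcl, ucl, out_of_control) per point."""
    z = mu0
    points = []
    for i, xi in enumerate(x, start=1):
        z = lam * xi + (1 - lam) * z
        hw = L * sigma * math.sqrt(
            lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        points.append((z, mu0 - hw, mu0 + hw, abs(z - mu0) > hw))
    return points
```

A small sustained mean shift that an individuals chart misses will walk the EWMA statistic across these limits within a handful of points.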
Thanks for your wise guidance, very helpful.
I have another question. If a process is being monitored long term and the results are not evenly spaced in time (or whatever is used on the X-axis), should we start to recalculate the sigma and mean after, let's say, a certain (long) gap?
Cpk and Ppk (short and long term, or within- vs. all-variation) are measures of UNDERLYING CAPABILITY vs. spec limits and are not useful without SPC charts to assure process behavior is NORMAL, or at least not heavily skewed. Neither is an INSTANTANEOUS measure of short-term capability changes, as the SPC chart would likely show a mean or sigma OOC signal with sparse or intermittent data.
So IF you have at least 25 points after a long gap of no data, you might recalculate the SPC limits just to see how much they have shifted or gotten more variable. But until you have perhaps 100 points I would not bother to report a change in capability...unless REQUIRED, in which case modern SPC software can give you Cpk values as well as mean and sigma, or median and IQR, for any SPC chart that meets a minimum sample size...and further, it gives CONFIDENCE LEVELS on the high and low side of the nominal Cpk or Cpm (related to target, not just limits).
Start with SPC charting, including EWMA as well as X and moving-range charts, as one option, depending on the type of process behavior (from exploratory data analysis, or a process capability STUDY during a well-controlled period). Check other discussions on this topic, as it has been discussed hundreds of times at ASQ.com (especially their Journal of Quality Technology), Isixsigma.com, and LinkedIn discussion groups.
Both SPC %OOC per quarter and Cpk per quarter are often studied for hints as to process behavior with SPC charting.
Long gaps in process results can simply mean you are monitoring the PRODUCT rather than the PROCESS, which could be running multiple products.
So Process Cpk for all products, using real process outputs and inputs in a MFG DB, can be useful as it gives bigger sample sizes (for Cpk assessment of underlying capability) as well as a measure of alarms and mistakes that are actionable (part of the cost of quality). Job shops are tricky, and Wheeler has studied SPC for short-run manufacturing (which includes mention of Cpk). Blogs by Wheeler and others, such as Isixsigma.com, can answer special questions, such as tool-wear trend impacts, short-run mfg, and non-normal process behavior. What SPC software do you have?
Sorry for the delay in responding to you. I use JMP to plot the SPC charts...(is there something else one can use?). For the calculation of the moving Cpk's, I export the data to Excel, do the calculations, and then import back into JMP. This is painful; however, since JMP does not have the option to reference individual cells like Excel does, I have no other option. Do you know if there is a workaround for this?
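One workaround for the Excel round-trip, if scripting outside JMP is acceptable, is a short Python/pandas script that computes a windowed moving Cpk and writes a column you can re-import into JMP (JMP also has its own scripting language, JSL, which may handle this natively); the function name and 25-point window are my own choices:

```python
import pandas as pd

def rolling_cpk(values, lsl, usl, window=25):
    """Moving Cpk over a fixed window of the most recent points,
    using the rolling mean and rolling (n-1) standard deviation.
    Returns NaN until a full window is available."""
    s = pd.Series(values, dtype=float)
    mu = s.rolling(window).mean()
    sigma = s.rolling(window).std()  # ddof=1 by default
    cpu = (usl - mu) / (3 * sigma)   # upper-limit side
    cpl = (mu - lsl) / (3 * sigma)   # lower-limit side
    return pd.concat([cpu, cpl], axis=1).min(axis=1)
```

The result can be saved with `to_csv` and opened in JMP as an extra column, avoiding cell-by-cell formulas entirely.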
What other add-in or software do you use for such work?
Thanks for sharing your valuable know how on this.
Have you plotted the raw data in a JMP Variability Plot? That can be used to break down the many components of variation visually, and to some extent numerically, usually using a random-effects model. Remember your goal is process, and thus product, IMPROVEMENT, not a Cpk number that moves up and down often.
Why do you need to recalculate Cpk for every measurement?
Is it mandatory?
You already have an SPC chart to show single-point excursions and trends in the process behavior.
You could report these OOC excursions as "% OOC" rather than use what I think is a misleading rolling Cpk that hides the perhaps-actionable OOC points.
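The "% OOC" metric is trivial to script; a minimal sketch, assuming you already have the control limits from your individuals chart:

```python
def pct_ooc(values, lcl, ucl):
    """Percent of points outside the control limits of an
    individuals chart -- a simple '% OOC' report metric."""
    ooc = sum(1 for v in values if v < lcl or v > ucl)
    return 100.0 * ooc / len(values)
```

Reported daily or weekly per chart, this keeps the individual excursions visible instead of averaging them away.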
Cpk is an UNDERLYING capability index.
It is used to ESTIMATE the PPM failure rate based on those spec limits.
It has a "short term" metric based on the subgroup standard deviation, which I find relatively useless except for focusing improvement efforts when the "in control" behavior is still not good enough given action taken only on OOC points.
And it has a "long term" metric based on ALL variance components, including temporal variation over at least 25 parts.
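The short-term vs. long-term distinction, and the PPM estimate mentioned above, can be sketched as follows; here the within-variation sigma is estimated from the average moving range (MRbar/d2 with d2 = 1.128 for individuals data), one common convention, and the PPM estimate assumes normality and counts only the nearer spec limit:

```python
import math
import statistics

def capability(data, lsl, usl):
    """Contrast 'short term' Cpk (within-variation sigma from the
    average moving range, MRbar/1.128) with 'long term' Ppk
    (overall sample sigma including temporal variation)."""
    mu = statistics.fmean(data)
    sigma_lt = statistics.stdev(data)          # all variation
    mrbar = statistics.fmean(
        [abs(b - a) for a, b in zip(data, data[1:])])
    sigma_st = mrbar / 1.128                   # within variation
    cpk = min(usl - mu, mu - lsl) / (3 * sigma_st)
    ppk = min(usl - mu, mu - lsl) / (3 * sigma_lt)
    return cpk, ppk

def ppm_from_index(k):
    """Estimated PPM outside the nearer spec limit for capability
    index k, assuming normality: 1e6 * Phi(-3k)."""
    return 1e6 * 0.5 * math.erfc(3 * k / math.sqrt(2))
```

For example, an index of 1.0 corresponds to about 1350 PPM on the nearer side; when the two indices diverge, the gap itself points at temporal variance components worth investigating.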
Daily reports on %OOC across all SPC charts can help prioritize short-term shop-floor efforts, while long-term Cpk can help prioritize INVESTMENT efforts that focus on the improvement of COMMON CAUSE variation that SPC charts miss on purpose.
That long-term Cpk is the only one my clients report monthly or quarterly, and it is not updated for each SPC chart point.
And when improvement action is suggested by a long-term Cpk less than 1.5 or 1.67 or whatever goal your industry has set as a target, you then need to look back at the SPC chart raw data and find the VARIANCE COMPONENTS that can be attacked by a CI team. You might use the JMP Variability Plot to visualize and even enumerate those VC's. Examples of variance components might be part-to-part variation, variation within a part, batch-to-batch if a batch process, and temporal variations, which could be hourly, shift to shift, PM to PM, day to day, week to week, some seasonal measure, etc. That Variability Plot, done in nested format for most production subgroups, or crossed format for DOE studies, can be very helpful for CI teams. We never report Cpk's without this kind of background capability study.
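The simplest case of that variance-component breakdown, one nesting level such as batch-to-batch vs. within-batch, can be sketched with a one-way random-effects ANOVA; this is a minimal illustration assuming balanced groups, not a replacement for JMP's Variability Plot:

```python
import statistics

def variance_components(groups):
    """Split total variation into between-group and within-group
    components via one-way random-effects ANOVA, assuming
    balanced groups (e.g. batches of equal size):
    sigma2_within  = MSE
    sigma2_between = max(0, (MSB - MSE) / n_per_group)."""
    k = len(groups)          # number of groups (batches)
    n = len(groups[0])       # measurements per group
    grand = statistics.fmean([x for g in groups for x in g])
    msb = n * sum((statistics.fmean(g) - grand) ** 2
                  for g in groups) / (k - 1)
    mse = sum((x - statistics.fmean(g)) ** 2
              for g in groups for x in g) / (k * (n - 1))
    return {"within": mse, "between": max(0.0, (msb - mse) / n)}
```

A large "between" component tells the CI team to attack batch-level causes; a large "within" component points at part-level or measurement causes instead.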
So again, how do you plan to USE a moving Cpk? It has the same problems as moving-range and moving-average charts in terms of actionable interpretation of variance components, which show up better on "multi-vari" or JMP Variability Plots...in my opinion. If it's required, challenge that request until someone explains what action would be taken for the many ups and downs it often shows when the UNDERLYING LONG-TERM CAPABILITY has NOT really changed.

Deal with short-term excursions using SPC rules and action plans that attack SPECIAL causes of variation using CONTROL limits. Deal with long-term process behavior vs. SPEC limits (rather than control limits), if necessary, by attacking the COMMON causes of variation which the SPC charts are designed to MISS so you can focus short term on OOC points.

CI is a two-step process: get rid of special causes, then re-assess capability, and if marginal, attack common causes using the JMP Variability Plots for visualization and DOE methods to study and improve those many other variance components.
This is just a suggestion based on my experience, not any particular textbook.
If you report Cpk too often, you will mislead management and customers, who will demand that you EXPLAIN every up or down value, which may only be a reflection of OOC points for down-shifts but may actually be real for up-shifts, which you will only really know after a much longer time! Metrics should be realistic and actionable, and to be actionable they must address EACH source of variation, not some "average" which only leads to more confusion.
But remember, I am not a SAS expert. I am simply a high-volume manufacturing process engineer who has USED JMP for more than two decades to drive improvement efforts, both short term and long term, where big bucks are often involved to improve common causes when spec limits are tight.