
Process Capability Decision Flowchart and Workflow for Optimal Customer-Supplier Outcomes

In today's complex high-tech manufacturing landscape, with myriad smaller suppliers and multicomponent systems, establishing robust supply chains is crucial for large (high-volume) customers. We require tight tolerances on lower and upper specification limits (LSL and USL). Smaller suppliers are usually single-source, and given the small volumes requested and their limited resources, our negotiating power over specification requirements is limited.

A JMP-driven process capability decision flowchart addresses these challenges. A normal approximation approach lets us assess process capability effectively while considering all data points, so no critical information is missed. Using normal approximation tools (e.g., Distribution with Fit All, or Life Distribution), we find the best-fitting curve by goodness of fit (GoF) to identify and address normality violation modes and pursue process improvement.

If Ppk < 1.33, we determine whether it is a spread (Pp) problem or a k-shift problem. For k-shift problems, we conduct a one-sample t-test against the target and, if it fails, equivalence testing of the mean at specified thresholds (margin and alpha). For Pp problems, we conduct a one-sample variance test against the spec range divided by 8 and, if it fails, equivalence testing of the standard deviation.
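For reference, the conventional overall capability measures behind this branch (a standard formulation, not specific to any one JMP report) are:

$$P_p = \frac{USL - LSL}{6\hat{\sigma}}, \qquad P_{pk} = \min\!\left(\frac{USL - \hat{\mu}}{3\hat{\sigma}},\ \frac{\hat{\mu} - LSL}{3\hat{\sigma}}\right), \qquad k = \frac{|T - \hat{\mu}|}{(USL - LSL)/2},$$

so that $P_{pk} = P_p(1 - k)$ when the target $T$ sits at the midpoint of the specification limits: a low $P_p$ signals a spread problem, while a large $k$ signals a centering (shift) problem.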

Equivalence tests (Fit Y by X, Oneway) incorporate negotiated thresholds beyond standard hypothesis tests, accounting for the business models and quality risk agreements between suppliers and customers. If the Pp or k-shift requirement is still not met, we collaborate on process improvements and then reestablish quality requirements. By using Workflow Builder to systematize our flowchart, we ensure consistent decision-making and establish mutual trust.

Our aim is joint profitability and growth, born of a transparent, systematic, and continuously improving problem-solving approach.

 

 

Thanks everybody for joining me. The title of this presentation is the Process Capability Decision Flowchart. I'm Patrick Giuliano, and my co-author for this discovery presentation is Dr. Charles Chen.

All right. This is going to be a bit of a systems-level presentation. There's plenty of JMP baked into this, but a lot of the analysis (in fact, all of it) is documented and will be located on the Discovery Project page for your review. Overall, the objective of this project is to create a decision flowchart to evaluate process capability end to end in a high-tech manufacturing environment.

Overall, we're looking at a very systematic, data-driven approach, with a key emphasis on an effective means of normal approximation for estimating statistics, and on doing root cause analysis meaningfully and effectively so that we can actually improve something. Our scope adopts the DMAIC project methodology, the classical engineering Six Sigma approach, which is also systematic and data-driven. We want to demonstrate an effective flowchart that guides the selection of appropriate JMP platforms and appropriate normal approximation statistics so that we can estimate process capability accurately.

Finally, once we codify the process, we want to leverage JMP's Workflow Builder to standardize the workflow and to record all of the steps so that anyone can reproduce the analysis from the beginning. Just for clarification: throughout the presentation, PCFC refers to the process capability flowchart, which should be clearly differentiated from any flow or workflow associated with JMP, which we would refer to as a JMP workflow or Workflow Builder.

This slide talks about the inspiration for this project. This project, which you can see in red, indicated by JCT-13, is part of a much broader project scope under the JMP University umbrella, which was conceived by my co-author. The idea is for this presentation to be in the context of having a champion who leads the methodology, drives it, and refines it.

Each respective project feeding into this JMP University concept would be associated and interplay with this particular project. What you'll see interwoven through this presentation are some of the other elements of root cause analysis and data-driven decision-making that overlap with or complement measuring process capability in a manufacturing environment.

The JMP University concept is really organized around three stages. The first is working with core team leaders to define the scope, the map, and the scorecard. The second is demonstrating the core team project, this process capability flowchart being one of those core team projects. The third is presenting the project, either in a Discovery Summit format or in other internal company formats, and then scaling up by taking this project and teaching other people on the ground at a particular manufacturing site how to implement it in their particular manufacturing or project scenario.

In keeping with the DMAIC tradition, we define the problem statement and the problem objectives in a very prescriptive way. I won't go through this in detail; you can read it later. But at a high level, the objective is to build this flowchart and also implement a scorecard rating system on the ability of the engineer or analyst to follow the flowchart. That's the primary objective, number 1: developing the methodology.

Then number 2 is the scorecard template, which is the methodology for consistently executing this flowchart and then developing the end-to-end workflow in JMP Workflow Builder.

One of the themes we'd like to emphasize in the context of defining the problem, expanding it, and working through it is effective leadership, which involves cross-functional decision-making and building teams with stakeholders. A key tool for consensus decision-making is following a flowchart or a process map. In this way, everyone can support and agree on the decisions.

We focus on the flowchart with the additional aspects we've outlined here. Ultimately, this allows us to deploy a decision-making framework on the ground to additional people so that we can scale it up consistently.

What feeds into the define phase? Since it's such a critical phase, it's really about problem definition, which in the DMAIC tradition is so important. The way we like to think of it is through what we call a JMP body diagram.

There are basically three buckets. We can think of these as the eyes, the brain, and the hands: the JMP eyes, JMP brain, and JMP hands, which stand for long-term vision, data-driven critical thinking, and hands-on practice. To achieve a long-term vision, we have to start by clearly defining the problem, critically assessing our objectives, and mastering the JMP tools to test and validate our concepts. This is the first theme: eyes, brain, and hands.

This approach connects with the second theme, the JMP nose, ears, mouth, and legs, which emphasizes presenting data effectively in JMP for clear, objective review by stakeholders.

Of course, when we present data, flexibility is key. That's the third theme within this JMP body diagram: the JMP decision arms and the feet. The arms represent decision and negotiation power, which needs to be flexible yet firm; the feet represent the ability to stand firmly on decisions that are data-driven and supported by JMP analysis.

In order to do this, we need to be prepared to adapt and use additional analysis or new presentation methods of the same analysis or the same data to support our sound risk-based decision-making in our quality environment. All of this is really anchored by a deep desire to improve oneself and to improve the processes and the people around oneself. This is the JMP heart, the DNA, the passion, and the belief.

What are we talking about here? Here we're elaborating a little bit more on the define phase. What I haven't mentioned in the context of the prior slides are voice of the talent, voice of the process, voice of the business, and voice of the customer. It's helpful for us to think about all of the key stakeholders in this high-level component format. We can use something called high-value problem, high-value opportunity analysis, which is HVP/HVO.

Companies often start by assessing their maturity levels in this way, and we can identify guidelines for continuous improvement in general, and we can feed that into our quality maturity process, which leads to effective quality management in the quality management system framework. This is really another methodology for a holistic approach, which should incorporate analytic tools like measurement systems analysis, statistical process control, and process capability assessment, which is the focus of this presentation and this analytic methodology.

Here's a slide that gives you an idea of what this quality maturity framework might look like if we were to plan it out. We can establish process quality maturity. We can identify design, new product introduction, and manufacturing components on the right, and we can identify tollgates, or stage-gate reviews, where as a cross-functional team we would determine whether or not we have certain components at each level in each phase, be it design, new product introduction, or manufacturing.

Each of our objectives and deliverables can be clearly laid out corresponding to the maturity level of our design or process. A big theme my co-presenter loves to state is: think big, start small, act. Part of starting small is detailing what we need so that we can act on it.

Here's another slide on the define phase in relation to this project. The idea here is, practically, how do we translate these four voices (VOC, VOB, VOT, VOP) into critical-to-quality attributes? Some of the tools we can use are the strengths, weaknesses, opportunities, and threats (SWOT) assessment, shown in the lower right-hand corner, or the SIPOC methodology (supplier, input, process, output, customer), an example of which is in the lower left-hand corner.

In the example on the lower left, the context provided is actually building a DOE model; the process aspect of the SIPOC includes refining the model itself. The SWOT analysis is a different example relative to this project, but we can think about it in terms of the output from the scorecard that we're going to use to grade the consistency of the flowchart analysis among analysts. We can use that output in a SWOT framework.

Taking this back to our project, in the define phase we're really focused on stage 1 today: demonstrating the methodology for a rigorous Ppk analysis.

Now we move on to the measure phase. The key component of this phase is the development of a scorecard so that we can systematize the process capability flowchart and help ensure consistency among individuals, using it to reduce bias by scoring our own performance and our colleagues' performance. We have to keep in mind that we don't necessarily have an objective standard. Think of it this way: if you were to rate something a 1 in terms of your ability to execute on it, what does that rating mean? Or a 3, or a 5, or a 10?

We're not just talking about ensuring consistency within a single rater, but about reproducibility among raters. Another practical way to think of this in a bit more detail: somebody might be biased toward rating themselves higher because they perceive that they adhere to a standard better than other people do. Or an expert in a particular domain, such as process capability, might rate other people more harshly than another rater would, because they have much more subject matter expertise with which to pick apart and scrutinize what another person is doing. We have to develop a standard; that's a key component of the scorecard rating assessment.

Part of this decision-making flowchart, which we'll focus on in more detail, is about assessing what we call normality violation modes. Here we're looking at the data and saying we expect the data, in general, to be normally distributed, but what are the mechanisms by which it may not follow that assumed distribution? What are the potential signals we can see in the data that would indicate some special cause is happening?

The six key modes we've identified are an outlier problem in the data, a skewness problem, a bimodal problem, a kurtosis problem, a measurement resolution problem, and a sample size issue.

In the context of the case studies used to work through the flowchart, we actually did two cases, numbers 1 and 3. In case 1, we composed simulated data containing outliers to different extents: 5% each at what we're calling a regular, marginal, and extreme distance from the median. In case 3, we looked at a bimodal case, again with simulated data (nothing company-specific or proprietary), where 80% of the data fell into a single mode and the other 20% fell into a separate mode.
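As an illustration only (the actual case-study data sets are in the project materials on the community; every parameter below is a hypothetical stand-in), data of this shape could be simulated along these lines:

```python
import numpy as np

rng = np.random.default_rng(2024)
n = 200

# Case 1: a nominally normal process with ~5% outliers at each of three
# increasing distances from the center ("regular", "marginal", "extreme").
base = rng.normal(loc=10.0, scale=0.5, size=int(n * 0.85))
outliers = np.concatenate([
    rng.normal(11.5, 0.1, int(n * 0.05)),   # regular-distance outliers
    rng.normal(12.5, 0.1, int(n * 0.05)),   # marginal-distance outliers
    rng.normal(14.0, 0.1, int(n * 0.05)),   # extreme-distance outliers
])
case1 = np.concatenate([base, outliers])

# Case 3: a bimodal process, 80% of points in one mode and 20% in another.
case3 = np.concatenate([
    rng.normal(10.0, 0.5, int(n * 0.8)),
    rng.normal(13.0, 0.5, int(n * 0.2)),
])
```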

Here is what we're calling a normality violation mode table. The idea is that we look at our particular data set and identify statistically what the issue is, as described in the prior slide, across these potential violation modes: outliers, skewness (of different characters, such as right skew versus skew in both directions), bimodality, heavy or light tails (which refers to kurtosis), measurement resolution, and sample size.

The first thing we would do, in the JMP philosophy, is look at the appropriate visualization tool, then at how we would measure the middle of a distribution that has a violation, how we would measure the spread, what we would expect when diagnosing the shape via the skewness and kurtosis statistics, what fit we would apply in JMP, and then how we would approximate the mean and standard deviation if we were to use a normal approximation.

In each of these cases, we've really thought carefully about what a valid normal approximation statistic would look like and why. We've done simulation to help us understand and confirm how these statistics behave in these different violation mode contexts.
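As a rough sketch of the kind of check involved (not the presenters' exact rubric; the cutoffs and the median/IQR-based approximation below are common conventions, used here purely for illustration):

```python
import numpy as np
from scipy import stats

def diagnose_and_approximate(x):
    """Flag a possible normality violation and return robust estimates."""
    x = np.asarray(x, dtype=float)
    skew = stats.skew(x)
    excess_kurt = stats.kurtosis(x)          # Fisher definition: normal -> 0
    sw_stat, sw_p = stats.shapiro(x)         # Shapiro-Wilk normality test

    # Robust location/spread: the median and IQR/1.349 approximate the mean
    # and standard deviation of a normal distribution and resist outliers.
    mu_hat = np.median(x)
    q75, q25 = np.percentile(x, [75, 25])
    sigma_hat = (q75 - q25) / 1.349

    return {
        "skewness": skew,
        "excess_kurtosis": excess_kurt,
        "shapiro_p": sw_p,
        "normal_ok": sw_p >= 0.05,           # illustrative cutoff
        "mu_hat": mu_hat,
        "sigma_hat": sigma_hat,
    }
```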

The key columns are violation mode and visualization. The first two columns confirm whether we have a violation and what it looks like. With the last three columns, we're not just identifying the problem; in doing this analysis, we're also finding the best estimate of Ppk for the particular normality violation mode. Then, later, we pursue root cause analysis and process improvement so we can reduce or eliminate the violation mode. This, in my mind, is the key slide that underpins this project.

What you'll see on the community is a Word document, like a notebook, that steps through this analysis in the context of the two case studies I alluded to previously. This is a lot, so what are we looking at, especially for somebody who hasn't seen this before? I'm trying to animate this. The first group, we're calling group A.

This is the first phase (or, to be consistent with the terminology, the first group). This is really about setting up the success criteria around Ppk; doing the normality test and determining which statistics should diagnose the distribution; running a goodness-of-fit analysis to find the best-fit distribution; doing bootstrapping, which we can do in certain contexts that I'll allude to a little later; looking at the normality violation mode table I just showed you and using it as our rubric; and then using all of that to drive process improvement to reduce or eliminate the normality violation modes we see. That's the first group, A.

Once we do that, we move on to either group B or group C. Group B is the situation where, based on the Ppk analysis, we are missing the target, meaning the center where our process is operating is off target. With a two-sided specification, the midpoint sits equidistant from the lower and upper specification limits. If, on average, we're off that midpoint by more than 20%, then we need to look at what we call the missing-target issue.

We would step through this workflow to negotiate that issue. Similarly, when we talk about group C, we're talking about a spread issue. This means that perhaps we're on target, but we have too much variation. If we have too much variation, how do we diagnose that? How do we negotiate it, assess it, and then ultimately drive process improvement? Groups A, B, and C, then, make up the flowchart.
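A minimal sketch of this branching logic, assuming the conventional Pp/Ppk/k definitions, a 1.33 requirement, and a 20% shift limit expressed as a fraction of the half-tolerance (the last point is an assumption, not stated explicitly by the presenters):

```python
import numpy as np

def ppk_decision(x, lsl, usl, target=None, ppk_req=1.33, k_limit=0.20):
    """Classify a capability problem as centering (Group B) or spread (Group C)."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)          # or a normal approximation
    target = (lsl + usl) / 2 if target is None else target

    pp = (usl - lsl) / (6 * sigma)
    ppk = min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))
    k = abs(target - mu) / ((usl - lsl) / 2)     # fraction of half-tolerance

    if ppk >= ppk_req:
        verdict = "capable"
    elif k > k_limit:
        verdict = "missing-target (k-shift) problem -> Group B"
    else:
        verdict = "spread (Pp) problem -> Group C"
    return pp, ppk, k, verdict
```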

Group A: this is just a zoom-in on group A. I think we've talked quite a bit about group A, and I've alluded to the case studies, which you'll find on our project page.

We start with a Ppk assessment after we go through group A, choosing an appropriate normal approximation method to estimate Ppk. Then, assuming we have an issue (which ultimately is the whole purpose here), we assess whether we can resolve and address a missing-target issue or a spread issue.

This is group B in more detail. Walking through it: we would do a one-sample mean test against the target in JMP's Distribution platform. If we didn't pass that test, that is, if our p-value was below 0.05 or whatever alpha criterion we choose, then we would do an equivalence test against a threshold.

If we didn't meet the equivalence test against the practical threshold we determined, we would have to do additional process improvement until we met that equivalence test criterion against the threshold we set.
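Outside of JMP, a sketch of this group B sequence might look like the following; the margin and alpha are negotiated placeholder values, and the two-one-sided-tests (TOST) construction is a standard equivalence approach rather than necessarily the exact JMP implementation:

```python
import numpy as np
from scipy import stats

def group_b_tests(x, target, margin, alpha=0.05):
    """One-sample t-test against the target, then a TOST equivalence test of the mean."""
    x = np.asarray(x, dtype=float)
    n, mean, sd = len(x), x.mean(), x.std(ddof=1)
    se = sd / np.sqrt(n)

    # Step 1: one-sample t-test of H0: mu == target.
    t_stat, p_two_sided = stats.ttest_1samp(x, popmean=target)
    on_target = p_two_sided >= alpha

    # Step 2 (if step 1 fails): TOST of H0: |mu - target| >= margin
    # versus H1: |mu - target| < margin.
    t_lower = (mean - (target - margin)) / se
    t_upper = (mean - (target + margin)) / se
    p_lower = stats.t.sf(t_lower, df=n - 1)    # mean exceeds the lower bound
    p_upper = stats.t.cdf(t_upper, df=n - 1)   # mean is below the upper bound
    p_tost = max(p_lower, p_upper)
    equivalent = p_tost < alpha

    return {"p_t_test": p_two_sided, "on_target": on_target,
            "p_tost": p_tost, "equivalent_within_margin": equivalent}
```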

Group C is very similar to the prior one. Here, if we had a spread issue, that is, if our spread capability (Pp) was statistically less than 1.33, we would do a one-sample variance test against the spec range divided by 8. If we didn't meet that, we would do an equivalence test of the standard deviation. Assuming we didn't meet that either, we would go into process spread improvement mode, where we would look at practical, on-the-ground improvement of the process before going back to the decision-making flowchart to reassess.
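Similarly, a sketch of the group C sequence, assuming a chi-square test of a single standard deviation against (USL - LSL)/8 (the sigma at which Pp = 1.33) and a one-sided test against a negotiated margin; the margin and alpha are placeholders, and JMP's own test details may differ:

```python
import numpy as np
from scipy import stats

def group_c_tests(x, lsl, usl, sigma_margin, alpha=0.05):
    """Chi-square test of sigma against (USL - LSL)/8, then a one-sided test against a margin."""
    x = np.asarray(x, dtype=float)
    n, s2 = len(x), x.var(ddof=1)
    sigma0 = (usl - lsl) / 8.0                       # sigma implied by Pp = 1.33

    # Step 1: chi-square test of H0: sigma == sigma0 (two-sided p-value).
    chi2 = (n - 1) * s2 / sigma0**2
    p_one_tail = min(stats.chi2.cdf(chi2, n - 1), stats.chi2.sf(chi2, n - 1))
    p_two_sided = min(1.0, 2 * p_one_tail)
    spread_ok = p_two_sided >= alpha

    # Step 2 (if step 1 fails): one-sided test of
    # H0: sigma >= sigma_margin versus H1: sigma < sigma_margin.
    chi2_m = (n - 1) * s2 / sigma_margin**2
    p_below_margin = stats.chi2.cdf(chi2_m, n - 1)
    within_margin = p_below_margin < alpha

    return {"sigma0": sigma0, "p_vs_sigma0": p_two_sided, "spread_ok": spread_ok,
            "p_vs_margin": p_below_margin, "sigma_within_margin": within_margin}
```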

You can see that the process improvement phase is a key component, and that root cause analysis with domain knowledge and careful assessment against the normality violation modes is critical. This is in line with a robust process methodology.

Here's an example of the scorecard I alluded to before, which is critical for the consistency of analysts following the flowchart. Each step of the flowchart is labeled with these A, B, and C groupings, as we did before. The first column indicates the flowchart step and name (the what), the objective of doing the step is detailed next (the why), and then the suggested tools for executing that step (the how) are detailed.

This is another, perhaps more detailed, way of reframing the flowchart and also ensuring that a fair assessment of execution on the flowchart has been made. The idea is that we take these indices, A through C, and label the flowchart with those specific line items to increase our consistency in how we execute the flowchart.

We talked about repeatability and reproducibility. One of the key tools we're going to use for that, and that industry in general uses, is Gauge R&R analysis. In this case, since we're doing a peer review and using a scoring methodology, we're not using a continuous grading system; we need something discrete, or attribute-based.

In fact, because we're using a scale, whether a 5-level or a 10-level scale (here, an assumed 5-level scale), we need to treat the data type as ordinal in JMP. For that purpose, we would probably do some ordinal Gauge R&R analysis using contingency table analysis in the Contingency personality of Fit Y by X.

This is an example of what that might look like. By the way, there are other tools for ordinal Gauge R&R: Kendall's coefficient of concordance is one, and testing the significance of Kendall's coefficient of concordance is another. Fleiss' kappa can also be used for ordinal Gauge R&R, although it's typically used for attribute Gauge R&R; that is something JMP's variability and attribute gauge charting platform leverages.
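For example, Kendall's W and its chi-square significance test can be computed directly from a raters-by-items matrix of ordinal scores; the sketch below uses made-up ratings purely for illustration:

```python
import numpy as np
from scipy import stats

def kendalls_w(ratings):
    """Kendall's coefficient of concordance for an (n_raters, n_items) score matrix."""
    ratings = np.asarray(ratings, dtype=float)
    m, n = ratings.shape                       # m raters, n items
    # Rank each rater's scores across items (average ranks for ties).
    ranks = np.apply_along_axis(stats.rankdata, 1, ratings)
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    w = 12.0 * s / (m**2 * (n**3 - n))         # no tie correction, for brevity
    chi2 = m * (n - 1) * w                     # approximate chi-square, df = n - 1
    p_value = stats.chi2.sf(chi2, n - 1)
    return w, p_value

# Example: three raters scoring five flowchart steps on a 1-5 scale.
scores = [[4, 3, 5, 2, 4],
          [4, 2, 5, 3, 4],
          [5, 3, 4, 2, 3]]
print(kendalls_w(scores))
```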

This describes more on root cause analysis and what we would do systematically in the analyze phase. We talked a lot about utilizing group A to validate potential Xs based on normality violation modes and to understand the associated process failure modes by looking at the distribution, also using JMP's dynamic visualization tools in the context of the Distribution platform.

We hit on this a little already, but this goes further into root cause analysis, identifying the most important Xs that explain the normality violation modes. You can see again that we're bringing in the three groups, A, B, and C, and revisiting the violation modes and their metrics. Of course, in general, root cause analysis and SME knowledge are fundamental for us to assess and mitigate these violation modes in our JMP problem-solving paradigm.

This slide gives a further example of revisiting the project scope and success criteria and what that looks like. This is a different context, calculating statistical tolerance intervals, but the mechanism of documenting and systematizing the decision-making process would be similar. It works in a similar way to the Ppk calculation methodology: we either use the existing data and negotiate by going through the group B and C methodology, and/or we go back, improve the process, remove the bad data points by identifying root cause and eliminating it from the process, and then go back through the analysis again.

This is a very analytic and prescriptive example of how we might revise specification limits and what the statistics would look like under the current and the new methodology, in this specific instance focusing on the Gauge R&R P/T (precision-to-tolerance) ratio in a variable Gauge R&R context. Again, this is not in the context of this flowchart but in a continuous measurement context. Here's a DFMEA occurrence and detectability table; this is another key analyze-phase tool we can use, and we wanted to showcase it as an example.

Here we're getting into the improve phase at a high level, hitting on similar themes of removing special-cause variation due to known root causes. In this case, we can simulate that situation by screening out data points in our simulated data, and we can build our methodology off that simulated data, which is in fact what we did in our case study.

Of course, we've defined a strategy for handling each violation mode, which we've codified in the table I alluded to in group A. Then, in the improve phase, we can also deploy continuous improvement initiatives. Again, we do that in a prescriptive way, with deliverables, tasks, owners, defined subject-matter experts, and completion dates.

Here's an example of some of the JMP tools we might use in the design and process verification and validation context for the improve phase.

Then here's where we get into knowledge transfer, the control phase, where we take the flowchart and train on the statistics, on examination of special-cause variation, on the JMP analysis, and on the accompanying Workflow Builder methodology. We document all the training, and we develop leaders to consistently follow the flowchart.

Another key component in this phase is a control plan, which is another way of measuring how well we're following the analysis methodology, quantifying success, and recognizing the people who are performing very well in relation to that control plan among the folks we're deploying the analysis framework to. This slide highlights the concept of developing the Workflow Builder package, which is quite practical, actually.

What we're essentially doing is this: we've developed the flowchart, and as we work through it, we save every bit of the analysis to the data table as a saved data table script in JMP. Then we open a new workflow, hit record, open the table, open each of the saved scripts from the table, and then stop the recording. This is just an easy way for us to capture the workflow in JMP in one shot, consistently.

This is expanding beyond the Ppk assessment. Here we're actually looking at Gauge R&R, process capability and stability, and really Gauge R&R and gauge quantification. This is very much complementary to the Ppk exercise and is the topic of a separate project with a similar methodology and frame of implementation.

In conclusion, we want to emphasize the value, explain the importance, and provide the relevant context and mechanisms for making a consensus decision using a process capability flowchart. An example might be Gauge R&R related, or Gauge R&R might complement an example, but the case studies we've included within the scope of this project do not consider the Gauge R&R context, as I described before and as will be included in our project package on the community.

As far as future work, we want to add a Gauge R&R portion to our next process capability flowchart. In the spirit of continuous improvement, we want to refine the flowchart even further, because we know for sure that measurement capability impacts process spread and process variation. Later, we can also connect this k-shift problem to the robust design concept and to robust design tools.

My co-author and I have to think further about connecting different core teams in the context of the JMP University framework, which I described at the beginning of the presentation, to create joint opportunities to further improve and refine this problem-solving methodology.

Thank you very much for your time, and if you have any questions, I invite you to reach out to us. Thanks again.