Life on the manufacturing floor can be painful.  Measurement Systems studies can provide a false sense of security.  Sometimes, designed experiments can start off well, but end up with mediocre results.  Or, our SPC systems can overwhelm everyone with false alarms. The unfortunate result is lost time, wasted resources and rapidly diminishing colleague support.  And to make things worse, as damage to the reputation of statistical methods accumulates, the availability of resources goes from decent to dismal.

The good news is that we can avoid these unpleasant scenarios and encourage the use of statistical methods across organizations by using visual workflows, error-proofing steps and recommended techniques that were refined during a multiyear Pyzdek Institute study.  During the study, we collected insight from clients, gained understanding of the root causes of statistical method disappointment and identified effective preventive measures. The study uncovered many root causes but also revealed an unexpected epiphany: a leading cause of disappointment was using statistical methods without considering important business perspectives.

In this presentation, we'll include five failure case studies along with the steps needed to prevent recurrence. We also make a few controversial recommendations that we hope will trigger interesting, perhaps heated, discussions.  For example:

  • Never run a designed experiment as a stand-alone effort
  • Teaching MSA, SPC and DOE as separate subjects may not yield the best organizational results
  • Beware the use of Optimize Desirability
  • You may not know what you already know
  • Meeting specification is not the operational goal

By the end of the presentation, attendees will have a better understanding of:

  • Why the context of business risk and workflow error-proofing are key to organizational acceptance of statistical methods
  • How to learn from the tribulations of other manufacturing organizations
  • How to ask Scotty to beam us from The World of Statistics to The World of Business at just the right time


Welcome everyone to At the Corner of Statistics Road and Business Decision Boulevard. Just a quick shout out to my co-presenter, Juan Rivera. Juan and I put this together as a team, but he decided to let me go ahead and do the presentation. Just a very, very quick overview of Pyzdek Institute based in Tucson, Arizona.

We have people stationed in different places around the world, and we teach engineering statistics and Lean Six Sigma to many clients across many industries. That's actually an important point, because what we're going to do today is share some experience that we had with clients from different industries and hopefully help people learn a bit. Finally, we're a JMP partner. Very pleased to be a JMP partner, and we'll move on.

Just to put this into perspective, this presentation is actually a sequel: back in 2020, at JMP Discovery in Tucson, we did a presentation called At the Corner of Lean Street and Statistics Road. This is a bit of a follow-on. That recording is available at jmp.com, if anybody would like to see it. Also, a few years after that, in 2022, Juan and I did a presentation at the ASQ Lean Six Sigma Conference, Error-Proofing for Designed Experiments. Again, learning from our clients, trying to help other people learn from those mistakes.

The presentation goal? Well, yeah, we've been looking at this for quite a while. It's a multi-year look, multi-year study. What we're trying to do is answer this question, why do statistical methods go off the rails and end up with a bad reputation? We see that ourselves.

When I worked full-time, the boss got mad because I did something out on the plant floor. How do we learn from those mistakes and come up with some preventive measures? That's the whole idea of what we're trying to accomplish today.

Very simple format. I've got five main findings to share with everybody, and each one has a case study. For each one, we're going to talk about what the customer did, the consequences they suffered from what they did, and how we were able to help them resolve the issues and prevent things from going off the rails a second time.

Here it is, Finding #1. To summarize: it's all about the business, not the statistics. That's a common theme that we saw. The root cause of the problem we can summarize as a failure to see the big picture.

Here's the first case study. This company makes tapes that are used in energy cables for water-blocking purposes. The photograph at the top is really cool: in the presence of water, everything swells up, so the tape uses water to block water. Really cool stuff. It has a super-absorbent polymer blended with nonwovens, binders, and the like.

The goal was to optimize a product for saltwater performance. In fresh water, these tapes swell very nicely, but not so much in saltwater. They had this technical challenge from somebody who wanted to bury some power cables in an area where there was some brackish groundwater.

They had controllable factors. They could control the weight of the nonwoven substrate. They had some control over the superabsorber particle size, which is available in different sizes. They had different bonding agents to stick everything together, and how much bonding agent to put in. They also had to add a corrosion inhibitor, so that if water does get in there, the corrosion inhibitor prevents corrosion.

A fairly simple, straightforward design, but a really important product, because if water gets into a cable, all hell breaks loose. Their response was the ingress of saltwater of a certain salinity into a cable, measured according to this sketch: strip away the sheath, expose the water-blocking tapes, apply some pressure using an elevated water tank, and see how far the water goes into the cable. Of course, they don't want it to go very far.

What they did was run a designed experiment. Great. If you look at the Prediction Profiler, you have the substrate weight, particle size, bonding amount, inhibitor amount, and the bonding type, the type of glue they used to stick everything together.

They ran the experiment, hit Maximize Desirability, and they chose Bonding Agent 2 because they got less water penetration with it. They modified the design of the tape and moved on to the next project. A routine case study, and a lot of people do something like this.

However, a couple of months later, they got hit by three bolts from the blue. First of all, the particle size of the superabsorber was drifting around, and they were unaware of it. Second, some special-cause variation in the weight of the nonwoven substrate caused some problems.

To make matters worse, the safety manager got notification that certain compounds were no longer acceptable in the European market. The safety manager issued orders to immediately cease the use of Bonding Agent 2, the very agent they had just chosen to optimize the product design for saltwater performance. Wow.

There were consequences. A panic switch: they had to switch from Bonding Agent 2 to Bonding Agent 1. They started incoming material testing. People were blaming each other. All heck broke loose. Here's the point: it was the statistical methods and the designed experiment that took the reputational hit. Hey, that DOE didn't work. Why is that?

Just as a side note, to share some other case studies, there are other unintended consequences that we've learned about from optimizing desirability. We've seen companies run a DOE, optimize desirability, and end up with a large increase in energy costs, or a work area that was dangerously hot for operators, or increased machine breakdowns and maintenance costs. Sometimes their customer, the person they're selling the product to, was taken by surprise, or there were new noises from machinery, and so on.

The first preventive measure that we came up with (and again, we were trying to help this customer using what we knew from our experience) is to use Maximize Desirability very carefully. We've seen people get in trouble with it.

In fact, we would recommend it only be used if you have a large number of responses, a lot of complex trade-offs between financial responses, technical responses, commercial responses, and so on. I urge a little bit of caution.
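To make the trade-off idea concrete, here is a minimal sketch of how an overall desirability is typically formed as a geometric mean of per-response desirabilities, in the Derringer-Suich style. The responses, limits, and numbers below are illustrative assumptions, not the client's data, and JMP's Profiler handles this for you; the point is only to show why one very poor response drags the overall score down.

```python
import numpy as np

# Illustrative sketch of desirability scoring for a candidate factor setting.
# All numbers are made up for illustration; they are not the client's data.

def desirability_smaller_is_better(y, best, worst):
    """Map a response to [0, 1]: 1 at or below 'best', 0 at or above 'worst'."""
    return float(np.clip((worst - y) / (worst - best), 0.0, 1.0))

# Two hypothetical responses for one candidate setting:
water_penetration_mm = 42.0    # technical response (smaller is better)
unit_cost_usd = 1.85           # business response (smaller is better)

d_penetration = desirability_smaller_is_better(water_penetration_mm, best=25.0, worst=80.0)
d_cost = desirability_smaller_is_better(unit_cost_usd, best=1.50, worst=2.50)

# Overall desirability is the geometric mean, so a single terrible response
# (desirability near zero) drags the whole setting down.
overall = (d_penetration * d_cost) ** 0.5
print(f"d_penetration={d_penetration:.2f}, d_cost={d_cost:.2f}, overall={overall:.2f}")
```

Because the geometric mean collapses to zero whenever any single desirability does, settings that look fine on most responses can still be rejected, which is exactly the behavior you want when juggling many competing responses.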

The second preventive measure is to acknowledge that design of experiments is a discrete exercise. Design of experiments is one of the greatest methods invented for improving, understanding, and optimizing processes, but it is a discrete exercise.

Had they used SPC control charts, either internally or from their supplier, on the particle size and all of that, they would have avoided a whole lot of pain. We believe that DOE and SPC make a really great team, and they should be paired together whenever possible.

The third preventive measure is to think about workflow policy. This company had these engineers doing all the right things, running the experiment and all of that, but there were a few misses. They should have been talking to the supply chain people and so on. What we recommend is to set some corporate policies: you're going to run an experiment, here's how you do it. We came up with this idea: let's break it down into phases.

Because at each phase, you probably want to involve different people. For example, this one company said, "We'll use these three phases: we're going to do the work, we're going to evaluate the data we collect, and then we're going to make some business decisions." They ended up creating some visual standards, including error-proofing steps.

They started with a small group of people. You got the people out there designing an experiment, collecting the data, so on, so forth. You don't want a big group of people involved at that point. You don't want supply chain involved at that point, and so on.

They then go to the second phase. This is where you start to bring in some additional people: supply chain, safety people, operators, supervisors, and others to evaluate the data. The point here is that you really want to over-collaborate: this is how we evaluate the data we just collected, we do certain steps, if it's good we do one thing, if it's not good we do something else, all set as policy.

Then in the final phase, this is where you bring in everybody, including C-levels and the like, and involve all your stakeholders to make the final decision. If they'd done that originally, they probably would have avoided all that pain.

The fourth preventive measure we recommend: some companies say, "Hey, we'll teach everybody DOE." Probably not the best idea. Don't teach DOE as a stand-alone subject; instead, teach it as part of statistical thinking. How do we get people to think statistically, not just run designed experiments, but look at everything?

Look at raw material performance, look at process centering studies, make sure our instruments are okay, ask whether there's any assignable-cause variation in the process, and teach all of these skills, reliability analysis and so on. That's the fourth preventive measure we recommend.

Here's a section summary of Case Study 1. After collecting the response data, ask Scotty to beam you from the World of Statistics to the World of Business; that's our recommendation. There's a picture of Mr. Spock the first time he saw the JMP Graph Builder. That's like me: hey, wow, that's the greatest thing in the world. All kidding aside, that's what we recommend to avoid problems like the one just described.

Here's Finding #2. To summarize, meeting spec is not the end game. We see a lot of customers chasing their specs, and they're getting in trouble because all they're doing is focusing on their specs. The root cause is, to summarize, in my words, failure to understand Taguchi. We'll talk about that in a little bit of detail.

I just wanted to bring up this quote here because I think it speaks very, very loudly. Everybody from the management suite all the way down to the shop floor should think about the quality paradox, in Donald Wheeler's words from his book Understanding Statistical Process Control: "Thus, we come to the paradox. As long as the management has conformance to specifications as its goal, it will be unable to reach that goal. If the actions of management signal that meeting specifications is satisfactory, the product will invariably fall short."

Great words. Our observations, what we learn from our customers, corroborate this quality paradox as stated by Donald Wheeler. Here's a case study: a CNC process, the machining of polycarbonate parts. The company got a large order for a fairly complex part, and they had the usual machinist types of responses: the parts have to be the right geometry, the right thickness, the angles have to be correct, the surface finishes have to be correct, and so on.

What they did when they got this order is they said, "Well, we've got this control over our process. We've got these specifications." They dug up the results of an old process capability analysis: hey, look, everybody, way back when, we had a Cpk of 1.357, based on 30 samples. Little did they know they probably should have done an SPC pre-study to make sure their process was stable and predictable.

They did not look at the confidence interval of the Cpk; that can always be a surprise. And afterwards, they didn't look at the process going forward over time. Again, you get this discrete project: process capability analysis is discrete. They then proceeded.

They concluded, based on the data they uncovered, that all was well: yeah, look, we've got the capability to meet the specs. They saw a Cpk greater than 1.3 as reasonable assurance of success. Understandably so; people talk about that all the time.

They took the order, made the parts, shipped the product, got it out the door so they could get to the next order. Understandable. However, there were some underlying truths. First of all, the parts they collected for their legacy PCA study were collected for convenience and really weren't a good representation of the overall process.

Now, in the plastics machining world, you don't have to worry as much about tool wear as in the metals machining world, but there are still things out there that change with time. They also didn't realize that the lower confidence limit of their Cpk index was 0.89. Again, process capability is discrete; there's no timescale.
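For readers who want to see where an interval like that comes from, here is a minimal sketch of a Cpk point estimate with an approximate normal-theory confidence interval. The data are simulated, the spec limits are hypothetical, and the approximation is a common textbook formula, so it will not exactly reproduce the 0.89 figure or the interval JMP reports.

```python
import numpy as np

# Minimal sketch: Cpk point estimate plus an approximate 95% confidence interval.
# Data are simulated for illustration; the normal-approximation interval shown here
# is a common textbook formula and may differ slightly from what JMP reports.
rng = np.random.default_rng(1)
x = rng.normal(loc=18.02, scale=0.03, size=30)   # 30 hypothetical measurements
lsl, usl = 17.9, 18.1                            # hypothetical spec limits

n = len(x)
xbar, s = x.mean(), x.std(ddof=1)
cpk = min(usl - xbar, xbar - lsl) / (3 * s)

# Approximate standard error of Cpk (Bissell-style approximation).
se = np.sqrt(1 / (9 * n) + cpk**2 / (2 * (n - 1)))
z = 1.96
print(f"Cpk = {cpk:.3f}, approx 95% CI = ({cpk - z*se:.3f}, {cpk + z*se:.3f})")
```

With only 30 samples, the interval is wide, which is exactly the surprise the text describes: a point estimate of 1.357 says much less than it appears to.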

What they learned a year ago didn't really tell them a whole lot about what the process looked like when they got the order. Little did they know their process was not in statistical control. That's one of the prerequisites of PCA: if your process is not in control, these indices really don't mean much.

Another thing: the specifications they were using were arbitrary. They came from a tolerance block, and those specifications didn't really reflect the ability to satisfy the customer. This is a very common occurrence, based on our experience with customers.

A few weeks later, the consequences: customers couldn't assemble some of the parts. These machined plastic parts go into an assembly, and the customer couldn't assemble it properly. Disputes started over some dimensions and so on. The supplier, the people making the parts, really had no proof of either measurement integrity or process control. They said, "Well, dear Mr. Customer, our Cpk index is 1.357." The customer, the buyer, says, "What else have you got? That doesn't mean anything to me. What else have you got?" They were in trouble. Consequences? High rates of rejection.

Again, the point of this presentation is that it's always the statistical method that takes the hit. Things go wrong, decisions are made, yet the statistical methods themselves are sound. Calculations of a Cpk index are fairly straightforward; none of that is under dispute, but it's the method that takes the hit. In the eyes of people in production, management, sales, and so on, it's another failure of statistics.

What do you do about it? You got this problem. The first preventative measure is to understand Taguchi. The Taguchi Loss Function, in the opinion of the Pyzdek Institute, is one of the most important concepts for anybody to master. Very simply, it says if you're on target in the middle, right on target, you minimize your losses.

As soon as you start drifting off target, off the center of your distribution, somebody starts to lose money. That really summarizes what these folks probably should have done.

In JMP, you can visualize the Taguchi Loss Function. We recommend companies do something like this: take one of their products. In this case, we've got some dimension, we've got a target, and there's a constant that you have to adjust until you get the right financial losses. You do some arithmetic, you create a table, and you turn it into a graph using Graph Builder. It's essentially a scatter plot, so you can show people and say, "Look, if our part is 18 millimeters, we get to keep all the money." Nice simple terms like that, terms that people can understand.

By the way, you don't want to call it the Taguchi Loss Function; you don't want to be talking in those terms with top floor people. It's the concept that's important. Then you can say something like, "Look, if a part is 17.93, it's in spec (the lower spec limit is 17.9), but we're losing $108 per part, according to the loss function."
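Here is a minimal sketch of the quadratic-loss arithmetic behind those numbers, assuming the target is 18 mm and back-calculating the loss constant so that a part at 17.93 mm loses about $108, matching the example in the talk. The resulting table is exactly the kind of thing you would plot as a scatter plot in Graph Builder.

```python
import numpy as np

# Minimal sketch of the quadratic (Taguchi-style) loss arithmetic.
# Assumption: target T = 18 mm, and the loss constant k is back-calculated so that
# a part at 17.93 mm loses about $108, matching the example figures in the talk.
T = 18.0                        # target dimension, mm
k = 108.0 / (18.0 - 17.93)**2   # roughly $22,000 per mm^2

def loss(x_mm: float) -> float:
    """Dollars lost per part as a function of the measured dimension."""
    return k * (x_mm - T) ** 2

# Build the kind of table you would hand to Graph Builder as a scatter plot.
dims = np.round(np.arange(17.90, 18.101, 0.01), 2)
for d in dims:
    print(f"{d:5.2f} mm -> ${loss(d):7.2f} lost per part")
```

On target, the loss is zero; at either spec limit it is already over $200 per part, even though every one of those parts is "in spec."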

People understand that. People see that. The point here is to try to get people to stop thinking in terms of meeting spec (I'm in spec, I'm out of spec) and start thinking about how far they are from the target.

A lot of really smart people have been saying that for many, many years, but that message doesn't necessarily get through to everybody. Again, I won't belabor the issue. The other preventive measure is to set some policies. If you're going to do process capability, what do you do? Well, okay, first you've got to make sure the process is in control. If not, you find and fix the problem.

Then you ask yourself, are these specs really meaningful? Are they just something from a drawing someplace, a little tolerance table on a drawing? Then you go through this whole thought process. Our recommendation is to include a lot of process behavior charts along the way.

Again, tell it like it is, another preventive measure. This is a drawing everyone has seen: you've got your lower spec, your upper spec, your target. Wheeler calls the region in between the Zone of Benign Neglect; I think he worded it very carefully. I call it the Zone of False Perfection. When you're outside the spec, you've got the lower zone of chaos and doom and the upper zone of chaos and doom.

When you think about it, that's the world of meeting spec. You're in one of these two states, you're doing your happy dance, or you're running around like crazy trying to fix the problem. We find that this thought process, this mindset, gets people into trouble.

Other things to consider: JMP has this add-in called the Gage Performance Curve Generator, or something like that. Great stuff. It's fantastic because it shows why you want to be on target: the fact is, as soon as you start getting close to your upper or lower spec, measurement noise becomes a deciding factor in customer satisfaction.

As soon as you get close to the lower spec, for example, there's some measurement noise. You may be judging parts that are in spec as out of spec and vice versa. This JMP add-in is fantastic, and we recommend our customers use it, learn it, and learn from it.
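To illustrate the idea the add-in is built on, here is a minimal sketch of a gage performance curve: the probability that a part is judged in spec as a function of its true value, given measurement noise. The spec limits and measurement sigma are illustrative assumptions, and the add-in's actual calculations may differ.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch of a gage performance curve: the probability that a part is
# judged "in spec" as a function of its true value, given measurement noise.
# Numbers are illustrative; the JMP add-in's exact calculations may differ.
lsl, usl = 17.9, 18.1      # hypothetical spec limits, mm
sigma_meas = 0.01          # hypothetical measurement standard deviation, mm

def p_accept(true_value: float) -> float:
    """P(measured value falls inside the specs | true part value)."""
    return norm.cdf(usl, loc=true_value, scale=sigma_meas) - \
           norm.cdf(lsl, loc=true_value, scale=sigma_meas)

for v in [17.88, 17.90, 17.92, 18.00, 18.08, 18.10, 18.12]:
    print(f"true = {v:5.2f} mm -> P(accepted) = {p_accept(v):.3f}")
# Far from the limits, acceptance is essentially 0 or 1; near a limit it is
# close to 0.5, so measurement noise, not the part, decides the outcome.
```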

To summarize the section, Case Study 2: variation reduction is the goal. However, process capability analysis and process capability indices focus attention and resources on meeting spec, and we see clients getting into trouble.

That's two out of five. Finding #3: you may not know what you already know. To summarize Finding #3, what we've got here is a failure to fully monetize our data. That's what we found as the root cause of these problems.

Here's a case study, another plastic processing company. They have worldwide locations, extrusion, injection molding, rotomolding, blow molding, all plastic products, hundreds of machines, a big, big company. Their problem with statistical methods in general is the sheer volume of things that they were doing.

They had independent statistical studies at each plant, different processes. Data storage was really not very organized. They had data files kept on local network hard drives, PC hard drives, thumb drives, cloud drives, you name it. The data was scattered everywhere.

They realized they had a problem: "We've got all this data. How do we organize it?" They tried some manual methods; they tried to assimilate data into certain places.

There was this long, long list: supplier data, current-state data about whether their processes are centered or not, measurement systems analysis data, you name it. They had it, lots and lots of data. The underlying truth was that it was just too much for any manual approach: "Everybody send your files here. Let's keep the files on this network drive or on this cloud drive or something."

They really weren't maximizing the value from all that data; they were spending a lot of money collecting it but not getting the value out. There was no data governance either. If one person calls a factor by one name and another person calls the same factor by another name, and you're trying to search for things, those factor name differences can get you into trouble. No common terminology and the like.

A few years later, after trying to manage all that data manually and giving it their best shot, consequences appeared. People were duplicating efforts: different plants making similar products, running duplicate studies, et cetera. They had some problems with audits. The company did some work for the aviation industry, so there were external audits, and they had problems finding data during those audits and a lot of grief.

Again, difficult searches, because in the world of computers, Temp 1 is not equal to Temp_1. They had trouble assimilating things together: hey, have I ever studied this particular temperature on an extruder? Have I ever studied this particular factor on an injection molding line? People were using different terminology, different factor names.
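As a small illustration of that search problem, here is a sketch of a data-governance rule that normalizes factor names before files are combined, using pandas and hypothetical column names. It is not a substitute for agreeing on terminology up front, but it shows how quickly "Temp 1" and "temp_1" stop being the same factor to a computer.

```python
import re
import pandas as pd

# Minimal sketch: without a naming convention, "Temp 1", "temp_1", and "TEMP-1"
# are three different columns to a computer. A simple normalization rule applied
# before files are combined makes them searchable as one factor.
# Column names below are hypothetical.

def normalize(name: str) -> str:
    """Lowercase, collapse spaces/dashes to underscores, strip stray characters."""
    name = name.strip().lower()
    name = re.sub(r"[\s\-]+", "_", name)
    return re.sub(r"[^a-z0-9_]", "", name)

plant_a = pd.DataFrame({"Temp 1": [201.0, 203.5], "Line Speed": [12.1, 12.4]})
plant_b = pd.DataFrame({"temp_1": [198.7, 200.2], "line-speed": [11.8, 12.0]})

combined = pd.concat(
    [df.rename(columns=normalize) for df in (plant_a, plant_b)],
    ignore_index=True,
)
print(combined.columns.tolist())   # ['temp_1', 'line_speed']
```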

The consequences: first of all, when you do things like this, you tend to lose knowledge when people leave. This guy's got some data on a hard drive, or on a network drive or something, and people lose track of it. Any new studies they did were made without the benefit of all the work they had done in the past. All this useful process insight becomes lost in time, like tears in rain.

Meanwhile, over in the C-suite, the chief executive officer and the other C-levels were absolutely horrified by the accumulating cost of all these statistical studies, and they didn't see the return on the investment. It didn't meet their expectations. Statistical methods take a high-level reputational hit once again.

What do we do about it? First preventive measure: change your mindset. Treat data, the results from your data, and the models you're building from your data as an asset, like machines or money in the bank.

It's a change in mindset. This data does not serve you well if it's scattered about. Establish corporate policies for data governance, storage, data analysis, and historical records across all of these plants, which, as I mentioned earlier, are scattered all over the world, in different time zones and on different continents.

If you establish a corporate policy, you're going to get the maximum value out of your data. ABC Widgets has worked 571 days without a lost-data incident; that's the approach you want to take. Meanwhile, in some industries, like the biotech industries, there's this FDA initiative called KASA, all about data governance and all about sailing through the audits. When the auditors come in and ask for something, man, you've got it at your fingertips.

Other preventive measures: if you're going to create a JMP data file, annotate it. Use a new table variable to record the goal of the project and all the high-level details. Use column notes, and answer the question: would the purpose, methods, details, and results be clear to somebody who didn't participate in the work five years from now?

For most JMP files, the answer is no. Somebody looking at it five years from now really wouldn't have a clue what was being done, what machinery was used, what measurement systems were used, who participated, and so on.

Another preventive measure that we think adds value: use JMP Query Builder. Bring in and assimilate the data as much as you can, and then consider the use of a Knowledge Relationship Management system. There are a few of them out there; one of them is CoBaseKRM. Some of you may have some experience with it.

It's tuned to work with JMP, it handles Excel data, and it requires data governance by definition. It's searchable and scalable. If you really want to get serious about extracting all of the gold nuggets from your data, this kind of wide-scale knowledge management system is a good place to start. If you've got multiple locations, or if you're having to do audits because that's business as usual in your industry, it's really going to help.

To summarize this section: knowledge, data, analyses, statistical models, cause-and-effect knowledge. It's all a company asset, but it's often not treated as such. If you lose all that knowledge, you're losing profit and becoming less competitive instead of more competitive. In some industries, this whole business of managing knowledge is really critical.

That's the third of five. Finding #4: SPC, it's not just for the end product anymore. What we've got here, again learning from others who have had these problems, is in this case a failure to understand the factors; that's the overall root cause in our findings.

Another case study, this is a metal coating process. Can't say too much about it, but a lot of factors, lots and lots of factors. Customer identified 21 process factors. There's a couple of responses. They were doing some stuff with metal.

What they were doing is, and this is, again, fairly common practice, they had statistical process control charts for the responses, the things that are important for the coating of the metal. They weren't really looking at the behavior of the factors. They were essentially looking at their process backwards.

If they saw a signal in the IMR chart for a response, they'd go back and start tweaking the process. That's putting the, what does it say, cart in front of the horse, in our opinion. The underlying truth is they had essentially institutionalized tampering in their process. They have all these control points, but they're not looking at them; they're just looking at the result.

There's also another underlying truth: there's all this other stuff out there, nuisance factors, raw material influences. They had different people out on the shop floor doing different things, and it was essentially systematic chaos, like what Deming demonstrated in his funnel experiment.

Some period of time later, they were suffering persistently high scrap rates, higher costs, missed deliveries, customers getting angry, and operators getting fed up with a process they were always fighting. Cause and effect remained a mystery.

The reason that happened is that they weren't looking at the factors; they were only looking at the responses. People started asking, "Hey, we've got SPC on that line. How come we've got so many problems?" SPC takes a reputational hit. The problem is they weren't looking at the factors: failure to understand the factors.

Here's the first preventive measure. If you look at Donald Wheeler's book Twenty Things You Need to Know, Chapter 3 has an absolutely fantastic discussion of controlled causes, uncontrolled causes, what causes might be hidden, and all of that really great stuff. Basically, the bottom line is: if you're really suffering with problems like this, control chart everything you can get your hands on.

Here's an example: make this array. JMP is very good at this; you can make a stack of control charts as high as you want. In this case, we had 23 stacked IMR charts: factor X1, factor X2, and so on for 13 process factors; four nuisance factors such as ambient temperature; some material properties that people thought would affect the process; and a couple of responses.

This whole system is really there so that you can look at the state of your process and figure out what might happen going forward. This is not really for retroactive use, but something you'd want to do on an hour-to-hour, day-to-day basis.

Look at this array of charts. If you do that and you see some behavior in a response, like we saw here, you might also see a change in a nuisance factor or a change in a material property at the same time. Very powerful technique: just look at a whole stack of control charts. It takes a little practice to get the hang of all of this and to set your control limits properly, but it can be done, and it's a very powerful technique.
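For the curious, here is a minimal sketch of the individuals-and-moving-range arithmetic behind one chart in such a stack, applied to each column in turn. The data and column names are simulated and hypothetical; in practice you would point this at your logged process factors, nuisance factors, and responses, and JMP's control chart platforms give you far more than these bare limits.

```python
import numpy as np
import pandas as pd

# Minimal sketch of IMR (individuals and moving range) limits for a stack of
# factor columns. Data and column names are simulated/hypothetical; in practice
# these would be your logged process factors, nuisance factors, and responses.
rng = np.random.default_rng(7)
data = pd.DataFrame({
    "x1_line_speed": rng.normal(12.0, 0.2, 60),
    "x2_oven_temp": rng.normal(210.0, 1.5, 60),
    "ambient_temp": rng.normal(23.0, 1.0, 60),
    "coating_thickness": rng.normal(48.0, 0.8, 60),   # a response
})

def imr_limits(x: pd.Series) -> dict:
    """Individuals-chart limits from the average moving range (2.66 = 3 / d2, d2 = 1.128)."""
    mr_bar = x.diff().abs().dropna().mean()
    center = x.mean()
    lcl, ucl = center - 2.66 * mr_bar, center + 2.66 * mr_bar
    return {
        "center": center,
        "lcl": lcl,
        "ucl": ucl,
        "points_out": int(((x < lcl) | (x > ucl)).sum()),
    }

for col in data.columns:
    lim = imr_limits(data[col])
    print(f"{col:18s} center={lim['center']:8.2f} "
          f"LCL={lim['lcl']:8.2f} UCL={lim['ucl']:8.2f} out-of-limits={lim['points_out']}")
```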

The other thing is, again, to set policy and emphasize that SPC, at the end of the day, is really for the factors. You want to look at your temperature control, your pressure control. We were talking earlier about the case study with the underground power cables.

What is the behavior of the super-absorbent polymer coming into my process? Because it's part of my process, essentially. You look upstream rather than, as in the classic example, just looking at the responses. Create workflows, learn from your mistakes, error-proof, document, set policies.

To summarize this section: processes are complex. There are variation sources all over the place; you've got things that are just trying to jump into your process and wreak havoc. If you look at the factors using statistical process control, and look everywhere, you're going to begin to understand the cause-and-effect behavior between the factors and the response. At the same time, it also provides you with some guidance if you want to run an experiment.

Here's a map that I got off of the JMP website. Really cool stuff. It says here, early on, identify factors and responses. It's easy to identify your responses; you know what you want, geometric control or water swelling of a tape. The tough thing to do, the tough question to answer in the world of DOE, is: what factors do I include?

People ask that question all the time. If you've control charted a whole bunch of your factors and begun to learn how they behave and how they might affect your process, you're going to have the insight, some guidance at least, to choose your factors wisely.

Last but not least, Finding #5: a good Gage R&R study does not mean your instrument is okay. You could also say a good EMP study does not mean your instrument is okay. We found this to be a fairly common problem. The root cause is that people tend not to really understand what measurement systems analysis is. There's an AIAG book on it that's thick, and it's not just Gage R&R.

Here's a case study of an aluminum alloy casting process. They're doing homogenization of the raw material, extruding tubes or cylinders, and annealing, and they're concerned about hardness along the way. They're measuring Rockwell hardness, a fairly well-developed measurement, and they're using these hardness tests to make some important decisions in-process, because it's a multistep process, and at final test.

What they did, and they did this when a customer complained, was a Gage R&R study. They got some pieces of aluminum with different known Rockwell hardnesses; you can buy these standards. They did a Gage R&R study. Great. This is interesting.

They got these fantastic results. Wow. You look at the variance components. All the variation seems to be coming from the parts and very little coming from the gage itself. They concluded the instrument was close to perfect.

Unbeknownst to them, first of all, the instrument was not stable over time, and they had some bias problems they were unaware of. If you look at some of this data in a variability chart, instead of just looking at the bottom-line results, you can see the instrument was having a much more difficult time with the B85 and B100 parts. Look at it by operator, look at it by part, and you see the same thing.

As we all know, the results of a Gage R&R are highly dependent on the part selection. If you get parts that are really significantly different from each other, you have a strong part signal, and whatever noise is in the measurement system is going to look relatively low.
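Here is a minimal sketch of the crossed Gage R&R variance-component arithmetic (ANOVA method) on simulated data, which also shows why widely spaced reference parts flatter the gauge: the part-to-part component dominates, so %GRR looks small. The hardness values, operator bias, and noise level are illustrative assumptions; JMP's MSA platforms do this calculation, and much more, for you.

```python
import numpy as np
import pandas as pd

# Minimal sketch of crossed Gage R&R variance components (ANOVA method) on
# simulated data. Note how reference parts with a wide hardness spread inflate
# the part-to-part component, so %GRR looks small even if the gauge has issues.
rng = np.random.default_rng(3)
parts = [60.0, 75.0, 85.0, 100.0]    # hypothetical "true" hardness of reference blocks
operators = ["A", "B", "C"]
r = 3                                # replicate measurements per part/operator

rows = [(i, op, true + 0.3 * j + rng.normal(0, 0.8))
        for i, true in enumerate(parts)
        for j, op in enumerate(operators)
        for _ in range(r)]
df = pd.DataFrame(rows, columns=["part", "operator", "y"])
p, o = len(parts), len(operators)
grand = df["y"].mean()

# Sums of squares for the two-way crossed layout.
ss_part = o * r * ((df.groupby("part")["y"].mean() - grand) ** 2).sum()
ss_oper = p * r * ((df.groupby("operator")["y"].mean() - grand) ** 2).sum()
ss_cells = r * ((df.groupby(["part", "operator"])["y"].mean() - grand) ** 2).sum()
ss_total = ((df["y"] - grand) ** 2).sum()
ss_int, ss_err = ss_cells - ss_part - ss_oper, ss_total - ss_cells

ms_part = ss_part / (p - 1)
ms_oper = ss_oper / (o - 1)
ms_int = ss_int / ((p - 1) * (o - 1))
ms_err = ss_err / (p * o * (r - 1))

# Convert mean squares to variance components (negative estimates clipped to 0).
var_repeat = ms_err
var_int = max(0.0, (ms_int - ms_err) / r)
var_oper = max(0.0, (ms_oper - ms_int) / (p * r))
var_part = max(0.0, (ms_part - ms_int) / (o * r))

grr = var_repeat + var_oper + var_int
pct_grr = 100 * np.sqrt(grr / (grr + var_part))
print(f"GRR variance = {grr:.3f}, part variance = {var_part:.3f}, %GRR (study var) = {pct_grr:.1f}%")
```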

One year later, the consequences: complaints persisted, there were frequent returns, and all these corrective action requests. If there's anything that gives me shivers, it's a corrective action request. I call it an equal-opportunity money-burning exercise.

That's my opinion of corrective action requests, because everybody spins around in circles, people get angry, and then they write up, "Oh, we're going to retrain our operators," and all that. None of it really solves the problem. The C-levels get mad. The statistical methods take a reputational hit.

What do we do? The preventive action: think about measurement systems analysis as a workflow. It is not just Gage R&R. There's a whole bunch of things you really need to do to understand your instrument. You want to start with a consistency check.

Measure the same part, same person, over and over and over and over again. If you see signals in that chart, stop immediately. You've got a problem you need to fix. You need to study bias and linearity. You need to make sure your instruments are calibrated.

EMP or Gage R&R is important, but it's just part of the process. You can look at the gage performance curves we talked about earlier. And if you have questions about your measurement system and you think it's going to affect your final inspection decisions, you can use these things called manufacturing specs, a temporary fix to compensate for measurement noise. Your instruments, really, you need to look at them over long periods of time. Gage R&R is discrete; SPC charts are not.
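Here is a minimal sketch of the guard-banding idea behind such manufacturing specs: tighten the acceptance limits by a multiple of the measurement standard deviation so that measurement noise is less likely to pass a part that is actually out of spec. The multiplier and all the numbers are illustrative assumptions, a policy choice rather than a universal rule.

```python
# Minimal sketch of guard-banded "manufacturing specs": tighten the limits used
# for accept/reject decisions by a multiple of the measurement standard deviation,
# so measurement noise is less likely to pass a part that is actually out of spec.
# The multiplier and all numbers are illustrative assumptions.
lsl, usl = 17.9, 18.1          # customer spec limits, mm
sigma_meas = 0.01              # estimated measurement standard deviation, mm
k = 2.0                        # guard-band multiplier (a policy choice)

mfg_lsl = lsl + k * sigma_meas
mfg_usl = usl - k * sigma_meas

def accept(measured_mm: float) -> bool:
    """Accept only if the measured value clears the tightened manufacturing specs."""
    return mfg_lsl <= measured_mm <= mfg_usl

print(f"manufacturing specs: [{mfg_lsl:.3f}, {mfg_usl:.3f}] mm")
for m in (17.905, 17.925, 18.000, 18.085):
    print(f"measured {m:.3f} mm -> {'accept' if accept(m) else 'reject / re-check'}")
```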

Section summary: there's this misconception about MSA; people think it's just Gage R&R. I've been in audits where the auditor says, "Show me your MSA," and people scurry around and show the Gage R&R results. That's the misconception.

Measurement systems analysis, measurement systems studies, should be continuous, not discrete. It's not a one-off affair after which you can draw these conclusions. Set some corporate practices, make them visual, and so on. Think in terms of the time-scaled behavior of your measurement systems.

That wraps it up. Here's the bottom line, summarizing the five case studies and some advice. First of all, learn from the trials and tribulations of others. That's what we're trying to do: we have these customers, we're trying to help them, and we're trying to learn from them. They learn from us; everybody learns together. The real issue is that statistical methods sometimes get in trouble because the work is done in a vacuum, with a purely statistical mindset.

At some point, you've got to look at the business impacts: cost, raw material availability, supply chain, safety, warehousing, you name it. There's a much bigger picture in a lot of these continuous improvement efforts. Again, think in terms of workflow rather than discrete thinking. Don't just go out and run an MSA; don't just go out and run a DOE. Think about the workflow: how do we use a broad range of statistical methods to improve our lives and make our customers happy?

Consider data and the models that you derive from your data as a very, very important corporate asset. Focus on factors, not just the responses. In my days as a manufacturing engineer, I looked at the factors. There's no dial out there for yield, no dial for dimensional control, no dial for anything that's a response. The dials are on the factors, so you need to understand how well you can control the factors and how they affect your responses.

Focus on variation reduction as a takeaway. Don't get me wrong, meeting spec is important. Let me say that clearly: meeting spec is important, but it's not the end game. You have to go beyond that and focus, just like Taguchi told us decades ago, on reducing variation and being on target.

We believe that if you do all of that, you'll get these accumulating improvements to your process. Your operators will be happier, your suppliers will be happier because you're helping them and they're helping you, and you're satisfying your customers. You get these accumulating improvements over time.

With that final thought, we'd like to say thank you. Juan and I, thank you for listening. Get in touch if you'd like to discuss any of these subjects, we'd be glad to help you.

Presented At Discovery Summit 2025


Life on the manufacturing floor can be painful.  Measurement Systems studies can provide a false sense of security.  Sometimes, designed experiments can start off well, but end up with mediocre results.  Or, our SPC systems can overwhelm everyone with false alarms. The unfortunate result is lost time, wasted resources and rapidly diminishing colleague support.  And to make things worse, as damage to the reputation of statistical methods accumulates, the availability of resources goes from decent to dismal.

The good news is that we can avoid these unpleasant scenarios and encourage the use of statistical methods across organizations by using visual workflows, error-proofing steps and recommended techniques that were refined during a multiyear Pyzdek Institute study.  During the study, we collected insight from clients, gained understanding of the root causes of statistical method disappointment and identified effective preventive measures. The study uncovered many root causes but also revealed an unexpected epiphany: a leading cause of disappointment was using statistical methods without considering important business perspectives.

In this presentation, we'll include five failure case studies along with the steps needed to prevent recurrence. We also make a few controversial recommendations that we hope will trigger interesting, perhaps heated, discussions.  For example:

  • Never run a designed experiment as a stand-alone effort
  • Teaching MSA, SPC and DOE as separate subjects may not yield the best organizational results
  • Beware the use of Optimize Desirability
  • You may not know what you already know
  • Meeting specification is not the operational goal

By the end of the presentation, attendees will have a better understanding of:

  • Why the context of business risk and workflow error-proofing are key to organizational acceptance of statistical methods
  • How to learn from the tribulations of other manufacturing organizations
  • How to ask Scotty to beam us from The World of Statistics to The World of Business at just the right time

 

 

 

Welcome everyone to At the Corner of Statistics Road and Business Decision Boulevard. Just a quick shout out to my co-presenter, Juan Rivera. Juan and I put this together as a team, but he decided to let me go ahead and do the presentation. Just a very, very quick overview of Pyzdek Institute based in Tucson, Arizona.

We got some people stationed in different places around the world, and we teach engineering statistics, Lean Six Sigma, many clients, many industries. That's actually an important point because what we're going to do today is to share some experience that we had with some of our clients from different industries and hopefully help people learn a bit. Finally, we're a JMP partner. Very pleased to be a JMP partner, and we'll move on.

Just to put this into perspective, this presentation is actually a sequel back in 2020, JMP Discovery in Tucson, did a presentation called At the Corner of Lean Street and Statistics Road. This is a bit of a follow-on. That recording is available at jmp.com, if anybody would like to see it. Also, a couple a few years after that, in 2022, Juan and I did a presentation at the ASQ Lean Six Sigma Conference, Error-proofing for Design Experiments. Again, learning from our clients, trying to help other people learn from said mistakes.

The presentation goal? Well, yeah, we've been looking at this for quite a while. It's a multi-year look, multi-year study. What we're trying to do is answer this question, why do statistical methods go off the rails and end up with a bad reputation? We see that ourselves.

When I work full-time, the boss has got mad because I did something out on the plant floor. How do we learn from those mistakes and come up with some preventive measures? That's the whole idea of we're trying to accomplish today.

Very simple format. I got five main findings to share with everybody. Each one has a case study. All we're going to talk about is what the customer did, the consequences that they suffered from what they did, and how we were able to help them resolve the issues and prevent things from going off the rails a second time.

Here it is, Finding #1. Summarize. It's all about the business, not the statistics. A common theme that we saw. The root cause of the problem, we can summarize as what we've got here is failure to see the big picture.

Here's the case study. First case study. This company makes tapes that are used in energy cable for water blocking purposes. That catch up on the top, the photograph on the top, really cool. In the presence of water, everything swells up, and it uses water to block water. Really cool stuff. It has a super-absorbent polymer blended with nonwovens and binders and the like.

The goal was to optimize a product for saltwater performance. In fresh water, these tapes swell very nicely and not so much in saltwater. They had this technical challenge for somebody who wanted to bury some power cables in an area where there was some groundwater, some brackish groundwater.

They had controllable factors. They could control the weight of the nonwoven substrate. They had some control over the superabsorber particle size. It's available in different sizes. They had different bonding agents to stick everything together, how much bonding agent they put in, and they also had to add a corrosion inhibitor so that water gets in there. The corrosion inhibitor prevents erosion.

A fairly simple, straightforward design, really important product because if water gets in cable, all hell breaks loose. Their response was to minimize the ingress of a certain salinity of saltwater into a cable, according to this sketch. Grip away the teeth, expose the water blocking tapes, apply some pressure using an elevated water tank, and see how far the water goes into the cable. Of course, they don't want it to go very far.

What they did, ran a design experiment. Great. If you look at the prediction profiler, have the substrate weight, particle size, bonding amount, the inhibitor amount, and the bonding type, the type of glue they used to stick everything together.

They ran an experiment, hit maximized desirability, and they chose Bonding Agent 2 because they got less water penetration with it. They modified the design of the tape, and they moved on to the next project. A routine case study, and a lot of people do something like this.

However, a couple of months later, they got hit by three bolts from the blue. First of all, the particle size of the superabsorber was drifting around, and they were unaware of it. There was some special cause variation in the weight of the substrate nonwoven caused some problems.

To make matters worse, safety manager got notification that certain elements, certain pompons, no longer acceptable in the European market. Safety manager issued orders to immediately siege the use of Bonding Agent 2, something they had just chosen to optimize the product design for saltwater performance. Wow.

There were consequences. Panic switch. I got to switch from Bonding Agent 2 to Bonding Agent 1. They started an incoming material test. People were blaming each other. All heck broke loose. Here's the point, it's the statistical methods and the design of experiment that took the reputational hit. Hey, that DOE didn't work. Why is that?

Just as a side note, just to share some other case studies, there are some other unintended consequence that we've learned about by using optimized desirability. We've seen companies run a DOE, optimize desirability, but they ended up with a large increase in energy costs, or they made the work area dangerously hot for operators, increased machine breakdown in maintenance costs. Sometimes their customer, the person they're selling their product taken by surprise, not just noises from machinery, and so on.

The first preventive measure that we came up with, and again, we were trying to help this customer and use what we knew about from our experience. First preventive measure, we recommend to use maximized desirability very carefully. We've seen people get in trouble with it.

In fact, we would recommend it only be used if you have a large number of responses, a lot of complex trade-offs between financial responses, technical responses, commercial responses, and so on. I urge a little bit of caution.

Second preventive measure is to acknowledge that design of experiments is a discrete exercise. Design of experiments is one of the greatest methods invented for improving processes and understanding processes and optimizing process, but it is a discrete exercise.

Had they used SPC control charts, either from their supplier regarding particle size and all of that, they would have avoided a whole lot of pain. We believe that DOE and SPC make a really great team, and they should be paired together whenever possible.

The third preventive measure is to think about the workflow policy. This company had these engineers doing all the right things, doing the experiment and all of that. But there were a few misses. They should have been talking to the supply chain people and so on. What we recommend is to set some policies, corporate policies. You're going to run an experiment. Here's how you do it. We came up with this idea, let's break it down into phases.

Because at each phase, you probably want to involve different people. For example, this one company said, "We'll use these three phases. We're going to work, do some work, and we're going to evaluate the data we collect, and then we're going to make some business decisions." They ended up creating some visual standards, including error proofing steps.

They started with a small group of people. You got the people out there designing an experiment, collecting the data, so on, so forth. You don't want a big group of people involved at that point. You don't want supply chain involved at that point, and so on.

They then go to the second phase. This is where you start to bring in some additional people, supply chain, safety people, operators, supervisors, and others to evaluate the data. Then to this point is you really want to over-collaborate. This is how we evaluate the data we just collected. We do certain steps. If it's good, we do one thing. If it's not good, we do something else, all to set some policy.

Then in the final phase, this is where you bring in everybody, including sea levels and the like, involve all your stakeholders to make the final decision. If they'd done that originally, they probably would have avoided all that pain.

Fourth preventive measure we recommend, some companies say, "Hey, we'll teach everybody DOE." Probably not the best idea. Each DOE is a stand-alone subject, but instead, teach it as part of statistical thinking. How do we get people to think statistically, not just run design experiments, but look at everything.

Look at raw material performance, look at process centering studies, make sure our instruments are okay. Do I have any assignable cause variation in my process and teach all of these skills, reliability analysis and so on? It's the fourth preventive measure we recommend.

Here's a section summary of Case Study 1. After collecting the response data, it gets study to beam you from the World of Statistics to the World of Business, our recommendation. There's a picture of Mr. Spock the first time he saw the JMP Graph Builder. It's like me. Hey, wow, that's the greatest thing in the world. All kidding aside, that's what we recommend to avoid problems like that one that was described.

Here's Finding #2. To summarize, meeting spec is not the end game. We see a lot of customers chasing their specs, and they're getting in trouble because all they're doing is focusing on their specs. The root cause is, to summarize, in my words, failure to understand Taguchi. We'll talk about that in a little bit of detail.

I just wanted to bring up this quote here because I think it speaks very, very loudly. Everybody from the management suite all the way down to the shop floor should think about the quality paradox in Donald Wheeler's word from his book, Understanding Statistical Process Control. Thus, we come to the paradox. As long as the management has conformance to specifications as its goal, it will be unable to reach that goal. If the actions of management signal that meeting specifications is satisfactory, the product will invariably fall short.

Great words. Our observations, what we learn from our customers, corroborate this quality paradox, as stated by Donald Wheeler. Here's a case study. I had a CNC process, the machining of polycarbonate parts. Company got a large order for a fairly complex part, and they had the usual machinists types of responses. The parts got to be the right geometry, right thickness, the angle's got to be correct, surface finishes have to be correct, and so on.

What they did is when they got this order, they said, "Well, we've got this control over our process. We've got these specifications." They dug up the results of an old process capability analysis. Hey, look, everybody, way back when we had a CPK of 1.357. It's based on 30 samples. Little did they know they probably should have done an SPC pre-study to make sure their process was stable and predictable.

They did not look at the confidence interval of the CPK. That can always be a surprise. Afterwards, they didn't look at the process going forward over time. Again, you get this discrete project. Process capability analysis is discrete, and they then proceeded.

They concluded, based on the data that they uncovered, that all was well. Yeah, look, we've got capability to meet the specs. We saw the CPK greater than 1.3 is some reasonable assurance of success. Understandably so. People talk about that all the time.

They took the order, made the parts, chipped the product, got it out the door, so they could get to the next order. Understandable. However, there's some underlying truth. First of all, the parts that they collected for their legacy PCA study was collected for convenience and really wasn't the good representation of the overall process.

Now, in the plastic machining world, you don't have to really worry as much about cool wear as in the metals machining world. There's some things that go on out there that change with time. They also didn't realize that the lower confidence interval of their CPK index was 0.89. Again, process capability is discrete. There's no timescale.

What they learned a year ago didn't really tell them a whole lot about what the process looked like when they got the order. Little did they know their process was not in statistical control. That's one of the prerequisites is PCA. If your process is in control, these indices really don't mean much.

Another thing is the specifications that they were using, they were arbitrary. They came from a tolerance block, and those specifications didn't really reflect the ability to satisfy the customer. This is a very common occurrence based on our experience with customers.

A few weeks later, the consequence is Customers couldn't assemble some of the parts. They had this machine plastic parts. Part of an assembly, they couldn't assemble it properly. The dispute started over some dimensions and so on. The supplier, the person making the parts, really had no proof of either measurement integrity or process control. They said, "Well, dear Mr. Customer, our CPK index is 1.357." The customer, the buyer, says, "what else you got? That doesn't mean anything to me. What else you got?" They were in trouble. Consequences? High rates or rejections.

Again, the point of this presentation is, it's always the statistical method that takes the hit. Things go wrong, decisions are made, and the statistical methods are sounds. Calculations of a CPK index are fairly straightforward. All of that sound is not under dispute, but it's the method that takes the hit. The reputation of people in production, management, people in sales, and all of that is another failure of statistics.

What do you do about it? You got this problem. The first preventative measure is to understand Taguchi. The Taguchi Loss Function, in the opinion of the Pyzdek Institute, is one of the most important concepts for anybody to master. Very simply, it says if you're on target in the middle, right on target, you minimize your losses.

As soon as you start drifting off target, off the center of your distribution, somebody starts to lose money. That really summarizes what these folks probably should have done.

In JMP, you can visualize it, Taguchi Loss Function. We recommend to companies to do something like this. Take one of their products. In this case, we've got some dimension, we've got targets, there's a constant that you have to adjust until you get the right financial losses, you do some arithmetic, and you can create a table that looks like a graph using the Graph Builder. It's essentially a scatter plot to show people to say, "Look, if our part is 18 millimeters, we get to keep all the money." In nice simple terms like that, terms that people can understand.

By the way, you don't want to call it the Taguchi Loss Function. You don't want to be talking about that with top floor people. It's concept. That's important. Then you can say something like, "Look, if a part is 17.93, it's in spec." There's a lower spec limit of 17.9, but we're losing $108 per part, according to the Taguchi Loss Function.

People understand that. People see that. The point here is to try to get people to stop thinking about meeting spec. I'm in spec, I'm out of spec, and start thinking about how far you are from the target.

A lot of really smart people have been saying that for many, many years, but that message doesn't necessarily get through to everybody. Again, I won't belabor the issue. The other preventive measure is to set some policies. If you're going to do process capability, what do you do? Well, okay, first you got to make sure the process is in control. If not, you find and fix the problem.

Then you ask yourself, are these specs really meaningful? Are they just some things from drawing someplace, a little table on a drawing? Then you go through this whole thought process. Our recommendation is to include really a lot of process behavior cards along the way.

Again, tell it like it is, another preventive measure. This is a drawing. Everyone has seen this. You got your lower spec, you got your upper spec, you got your target. Wheeler calls this the Zone of Benign Neglect. I think he worded it very carefully. I call it the Zone of False Perfection. When you're outside the spec, you got the lower zone of chaos and doom and the upper zone of chaos and doom.

When you think about it, that's the world of meeting spec. You're in one of these two states, you're doing your happy dance, or you're running around like crazy trying to fix the problem. We find that this thought process, this mindset, gets people into trouble.

Other things for companies to consider: JMP has an add-in called the Gage Performance Curve Generator, or something like that. Great stuff. It's fantastic because it shows why you want to be on target: as soon as you start getting close to your upper or lower spec, measurement noise becomes a deciding factor in customer satisfaction.

As soon as you get close to the lower spec, for example, because there's some measurement noise, you may be judging parts that are in spec as out of spec, and vice versa. This JMP add-in is fantastic, and we recommend our customers use it, learn it, and learn from it.
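Here is a small sketch of the idea behind a gage performance curve, with made-up numbers (a 17.9 mm lower spec and 0.02 mm of measurement noise): for a part whose true value sits near the spec limit, whether it passes or fails inspection is driven mostly by measurement noise.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical numbers: LSL = 17.9 mm, measurement noise sigma_m = 0.02 mm.
lsl = 17.9
sigma_m = 0.02

def p_accept(true_value):
    """Probability that a single measurement of this part reads >= LSL."""
    return 1.0 - norm.cdf(lsl, loc=true_value, scale=sigma_m)

for true_value in [17.85, 17.88, 17.90, 17.92, 17.95]:
    status = "in spec" if true_value >= lsl else "out of spec"
    print(f"true = {true_value:.2f} mm ({status}): "
          f"P(accepted) = {p_accept(true_value):.2f}")
```

A part sitting exactly on the spec limit is essentially a coin flip, while a part well inside the spec is almost never misjudged, which is exactly why being on target matters.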

To summarize this section, Case Study 2: variation reduction is the goal. However, process capability analysis and process capability indices focus attention and resources on meeting spec, and we see clients getting into trouble because of that.

That's two out of five. Finding #3: you may not know what you already know. What we've got here is a failure to fully monetize our data. That's what we found as the root cause of these problems.

Here's a case study, another plastic processing company. They have worldwide locations, extrusion, injection molding, rotomolding, blow molding, all plastic products, hundreds of machines, a big, big company. Their problem with statistical methods in general was the sheer volume of things they were doing.

They had independent statistical studies at each plant, different processes. Data storage was really not very organized. They had data files kept on local network hard drives, PC hard drives, thumb drives, cloud drives, you name it. The data was scattered everywhere.

They realized they had a problem: "We've got all this data. How do we organize it?" They tried some manual methods. They tried to assimilate the data into certain places.

There was this long, long list: supplier data, current-state data about how their processes were behaving, measurement systems analysis data, you name it, they had it. Lots and lots of data. The underlying truth was that it was just too much for any manual approach of "Everybody send your files here; let's keep the files on this network drive or on that cloud drive or something."

They really weren't maximizing the value of all that data. They were spending a lot of money collecting it, but not getting the value out of it. There was no data governance either. If one person calls a factor by one name and another person calls the same factor by another name, and you're trying to search for things, those factor-name differences can get you into trouble. No common terminology, and the like.

A few years later, after trying to manage all that data manually and giving it their best shot, the consequences appeared. People were duplicating efforts, different plants making similar products were running duplicate studies, et cetera. They had some problems with audits. The company did some work for the aviation industry, so there were external audits. They had problems finding data during those external audits, and a lot of grief.

Again, searches were difficult because in the world of computers, Temp 1 is not equal to Temp_1. You have trouble pulling things together: hey, have I ever studied this particular temperature on an extruder? Have I ever studied this particular factor on an injection molding line? People were using different terminology, different factor names.
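As a sketch of what a data-governance rule can do about this, here's one possible normalization step in Python; the file name and column names are hypothetical, and the point is only that names have to be made comparable before you can search across studies.

```python
import re
import pandas as pd

def normalize_name(name: str) -> str:
    """Collapse spacing, case and punctuation so 'Temp 1', 'temp_1' and 'TEMP-1' all match."""
    return re.sub(r"[^a-z0-9]+", "_", name.strip().lower()).strip("_")

# A naive comparison fails:
print("Temp 1" == "Temp_1")                                   # False
# A normalized comparison succeeds:
print(normalize_name("Temp 1") == normalize_name("Temp_1"))   # True

# Applied to a hypothetical study file before it goes into the shared store:
df = pd.read_csv("extruder_study_plant_A.csv")                # hypothetical file
df.columns = [normalize_name(c) for c in df.columns]
```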

The consequences? First of all, when you do things like this, you tend to lose knowledge when people leave. This guy's got some data on a hard drive, or on a network drive or something, and people lose track of it. Any new studies they did were made without the benefit of all the work they had done in the past. All this useful process insight becomes lost in time, like tears in rain.

Meanwhile, over in the C-suite, the chief executive officer and the other C-levels were horrified by the accumulating cost of all these statistical studies, and they didn't see the return on all of that investment. It didn't meet their expectations. Statistical methods took a high-level reputational hit once again.

What do we do about it? The first preventive measure: change your mindset. Treat data, the results from your data, and the models you build from your data as an asset, like machines or money in the bank.

It's a change in mindset. This data does not serve you well if it's scattered about. Establish corporate policies for data governance, storage, data analysis, and historical records across all of these plants; as I mentioned earlier, these plants are scattered all over the world, on different continents and in different time zones.

If you establish a corporate policy, you're going to get the maximum value out of your data. "ABC Widgets has worked 571 days without a lost-data incident." That's the approach you want to take. Meanwhile, in some industries, in the biotech industries, there's this FDA initiative called KASA, which is all about data governance and sailing through audits. When the auditors come in and ask for something, you've got it at your fingertips.

Other preventive measures: if you're going to create a JMP data file, annotate it. Use table variables to record the goal of the project and all the high-level details. Use column notes, and answer the question: would the purpose, methods, details, and results be clear to somebody who didn't participate in the work five years from now?

For most JMP files, the answer is no. Somebody looking at it five years from now really wouldn't have a clue what was being done, what machinery was used, what measurement systems were used, who participated, and so on.
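Inside JMP, table variables and column notes are the tools for this. As a language-neutral sketch of the same idea, you could keep a small metadata record next to every data file; the file names and field contents below are hypothetical.

```python
import json
import pandas as pd

df = pd.read_csv("hardness_study.csv")        # hypothetical data file

# Table-level notes, the analogue of JMP table variables:
metadata = {
    "purpose": "Annealing temperature vs. Rockwell hardness, Line 3",
    "owner":   "process engineering",
    "gage":    "Rockwell tester #2, last calibrated (date recorded here)",
    "column_notes": {                          # analogue of JMP column notes
        "anneal_temp_c": "Setpoint from furnace PLC, deg C",
        "hardness_hrb":  "Rockwell B, mean of 3 indentations per coupon",
    },
}

# Keep the notes next to the data so that, five years from now, someone who
# didn't participate in the work can still tell what was done and why.
with open("hardness_study.meta.json", "w") as f:
    json.dump(metadata, f, indent=2)
```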

Another preventive measure that we think adds value: use JMP Query Builder. Bring data in and assimilate it as much as you can, and then consider using a Knowledge Relationship Management system. There are a few of them out there; one of them is CoBaseKRM. Some of you may have some experience with it.

It's tuned to work with JMP, it handles Excel data, and it requires data governance by definition. It's searchable and scalable. If you really want to get serious about extracting all the gold nuggets out of your data, a wide-scale knowledge management system like that is a good place to start, especially if you've got multiple locations or if audits are business as usual in your industry.

To summarize this section: knowledge, data, analyses, statistical models, cause-and-effect understanding. It's all a company asset, but it's often not treated as such. If you lose that knowledge, you're losing profit and becoming less competitive instead of more competitive. In some industries, managing knowledge is really critical.

That's the third of five. Finding #4: SPC, it's not just for the end product anymore. Again, what we're learning from others who have had these problems is, in this case, a failure to understand the factors; that's the overall root cause in our findings.

Another case study, this one a metal coating process. We can't say too much about it, but there were lots and lots of factors. The customer identified 21 process factors, plus a couple of responses.

What they were doing is, and this is, again, fairly common practice, they had statistical process control charts for the responses, the things that are important for the coating of the metal. They weren't really looking at the behavior of the factors. They were essentially looking at their process backwards.

If they saw a signal in the IMR chart for a response, they'd go back and start tweaking the process. That's putting the cart before the horse, in our opinion. The underlying truth is they had essentially institutionalized tampering in their process. They had all these control points, but they weren't looking at them; they were just looking at the result.

There's another underlying truth: there's all this other stuff out there, the nuisance factors and raw material influences. They had different people out on the shop floor doing different things, and it was essentially systematic chaos, like what Deming demonstrated in his funnel experiment.

Some period of time later, they were suffering persistently high scrap rates, higher costs, missed deliveries, customers getting angry, and operators getting fed up with a process they were always fighting with. Cause and effect remained a mystery.

The reason that happened is that they weren't looking at the factors; they were only looking at the responses. People started asking, "Hey, we've got SPC on that line. How come we've got so many problems?" SPC takes a reputational hit. The problem is they weren't looking at the factors, a failure to understand the factors.

Here's the first preventive measure. If you look at Donald Wheeler's book, Twenty Things You Need to Know, Chapter 3, there's an absolutely fantastic discussion about controlled causes, uncontrolled causes, which causes might be hidden, and all of that really great stuff. The bottom line is, if you're really suffering with problems like this, control chart everything you can get your hands on.

Here's an example. Make an array like this; JMP is very good at this, and you can make a stack of control charts as high as you want. In this case, we had 23 stacked IMR charts: factor X1, factor X2, and so on, covering 13 process factors, four nuisance factors such as ambient temperature, some material properties people thought would affect the process, and a couple of responses.

This whole system is really there so that you can look at the state of your process and figure out what might happen going forward. It's not really for retroactive use; it's something you'd want to do on an hour-to-hour, day-to-day basis.

Look at this array of charts. If you do that, when a response misbehaves, you might see a corresponding change in a nuisance factor or in a material property. Just looking at a whole stack of control charts is a very powerful technique. It takes a little practice to get the hang of it and to set your control limits properly, but it can be done, and it's very powerful.
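For anyone who wants the arithmetic behind each chart in the stack, here's a minimal sketch of individuals-and-moving-range limits applied to several columns at once. The process log and column names are hypothetical; the limits use the standard moving-range estimate, the average moving range divided by 1.128.

```python
import pandas as pd

D2 = 1.128  # bias factor for a moving range of span 2

def imr_limits(x: pd.Series):
    """Individuals-chart center line and natural process limits from the average moving range."""
    mr = x.diff().abs()
    sigma_hat = mr.mean() / D2
    center = x.mean()
    return center - 3 * sigma_hat, center, center + 3 * sigma_hat

df = pd.read_csv("coating_line_log.csv")   # hypothetical process log, in time order
# A few factors plus a response, stacked just like the array of charts described above:
columns = ["bath_temp", "line_speed", "ambient_rh", "coating_weight"]

for col in columns:
    lcl, cl, ucl = imr_limits(df[col])
    out = df.index[(df[col] < lcl) | (df[col] > ucl)]
    print(f"{col}: CL={cl:.2f}, limits=({lcl:.2f}, {ucl:.2f}), "
          f"{len(out)} point(s) outside limits")
```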

The other thing, again, is to set policy and emphasize that SPC, at the end of the day, is really for the factors. You want to look at your temperature control, your pressure control. We were talking earlier about the case study with underground power cables.

What is the behavior of the super-absorbent polymer coming into my process? It's part of my process, essentially. You look upstream rather than doing the classic thing of just looking at the responses. Create a workflow, learn from your mistakes, error-proof, document, set policies.

To summarize this section: processes are complex, and there are variation sources all over the place. You've got things that are just waiting to jump into your process and wreak havoc. If you look at the factors using statistical process control, and look everywhere, you're going to begin to understand the cause-and-effect behavior between the factors and the responses. At the same time, it provides you with some guidance if you want to run an experiment.

Here's a DOE roadmap that I got off the JMP website. Really cool stuff. Early on it says: identify factors and responses. It's easy to identify your responses; you know what you want, geometrical control or water swelling of a tape. The tough question to answer in the world of DOE is, what factors do I include?

People ask that question all the time. If you've control charted a whole bunch of your factors and begun to learn how they behave and how they might affect your process, you're going to have the insight, or at least some guidance, to choose your factors wisely.

Last but not least, Finding #5: a good Gage R&R study does not mean your instrument is okay. You could also say a good EMP study does not mean your instrument is okay. We found this to be a fairly common problem. The root cause is that people tend not to understand what measurement systems analysis really is. There's an AIAG book on it that's thick, and it's not just Gage R&R.

Here's a case study of an aluminum alloy casting process. They're doing homogenization of the raw material, extruding tubes or cylinders, annealing, and they're concerned about hardness along the way. They're measuring Rockwell hardness, a fairly well-developed measurement, and using these hardness tests to make some important decisions in-process, because it's a multistep process, and at final test.

What they did, when the customer complained, was a Gage R&R study. They got some pieces of aluminum with different known Rockwell hardnesses, you can buy these standards, and they ran the study. Great. This is where it gets interesting.

They got these fantastic results. Wow. You look at the variance components. All the variation seems to be coming from the parts and very little coming from the gage itself. They concluded the instrument was close to perfect.

Unbeknownst to them, the instrument was not stable over time, and they had some bias problems they were unaware of. If you look at the data in a variability chart, instead of looking just at the bottom-line results, you can see the instrument was having a much more difficult time with the B85 and B100 parts. Look at it by operator, look at it by part, and you see the same thing.

As we all know, the results of a Gage R&R are highly dependent on the part selection. If you pick parts that are really significantly different from each other, you have a strong part-to-part signal, and whatever noise is in the measurement system is going to look relatively small.
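Here's a small simulation, not from the case study, that shows the effect: with the same instrument noise, a wide spread of reference parts makes the gage's share of total variation look tiny. It is a simplified repeatability-only calculation, not the full AIAG analysis, and all the numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def pct_grr(part_sd, gage_sd, n_parts=10, n_repeats=3):
    """Simulate a simple repeatability study and return %GRR (gage SD / total SD)."""
    parts = rng.normal(90, part_sd, n_parts)                      # true part values
    meas = parts[:, None] + rng.normal(0, gage_sd, (n_parts, n_repeats))
    gage_var = meas.var(axis=1, ddof=1).mean()                    # within-part (repeatability)
    total_var = meas.var(ddof=1)                                  # total observed variation
    return 100 * np.sqrt(gage_var / total_var)

gage_sd = 1.0  # same instrument noise in both scenarios
print(f"Wide part spread   (SD=10): %GRR ~ {pct_grr(10.0, gage_sd):.0f}%")
print(f"Narrow part spread (SD=2):  %GRR ~ {pct_grr(2.0, gage_sd):.0f}%")
```

Same gage, very different verdict, purely because of how the parts were chosen.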

One year later, the consequences: complaints persisted, there were frequent returns, and all these corrective action requests. If there's anything that gives me shivers, it's a corrective action request. I call it an equal-opportunity money-burning exercise.

That's my opinion of corrective action requests, because everybody spins around in circles, people get angry, and then they write up, "Oh, we're going to retrain our operators," and all of that. None of it really solves the problem. The C-levels get mad, and the statistical methods take a reputational hit.

What do we do? The preventive action: think of measurement systems analysis as a workflow. It is not just Gage R&R. There's a whole bunch of things you really need to do to understand your instrument. You want to start with a consistency check.

Measure the same part, same person, over and over and over again, and put the results on a control chart. If you see signals in that chart, stop immediately; you've got a problem you need to fix. You need to study bias and linearity. You need to make sure your instruments are calibrated.

EMP or Gage R&R is important, but it's just part of the process. You can look at the gage performance curves we talked about earlier. And if you have questions about your measurement system and you think it's going to affect your final inspection decisions, there are methods you can use, these things called manufacturing specs, as a temporary fix to compensate for measurement noise. Your instruments really need to be looked at over long periods of time. Gage R&R is discrete; SPC charts are not.
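As a sketch of the manufacturing-spec idea, here's one common form of guard-banding: pull the acceptance limits inside the customer spec by a multiple of the measurement standard deviation. The spec limits, noise level, and 95% guard-band multiplier below are all just assumptions for illustration, not values from the case study.

```python
from scipy.stats import norm

# Hypothetical values: customer spec 17.9-18.1 mm, measurement noise sigma_m = 0.02 mm.
lsl, usl = 17.9, 18.1
sigma_m = 0.02

# Guard-band multiplier: a part whose true value sits exactly at the customer LSL
# has only about a 5% chance of reading above the tightened acceptance limit.
z = norm.ppf(0.95)

mfg_lsl = lsl + z * sigma_m   # tightened acceptance limits ("manufacturing specs")
mfg_usl = usl - z * sigma_m

print(f"Customer spec:      {lsl:.3f} - {usl:.3f} mm")
print(f"Manufacturing spec: {mfg_lsl:.3f} - {mfg_usl:.3f} mm (guard-banded)")
```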

Section summary, there's this misconception about MSA. People think it's Gage R&R. I've been in audits where the auditor says, "Show me your MSA," and people scurry around, and they show the Gage R&R results. Misconception.

Measurement systems analysis, measurement systems studies, should be continuous, not discrete. It's not a one-off affair after which you can make these conclusions forever. Set some corporate practices, make them visual, and so on. Think in terms of the behavior of your measurement systems over time.

That wraps it up. Here's the bottom line, a summary of the five case studies and some advice. First of all, learn from the trials and tribulations of others. That's what we're trying to do: we have these customers, we're trying to help them, and we're trying to learn from them. They learn from us, and everybody learns together. The core idea is that statistical methods sometimes get into trouble because they're applied in a vacuum, with a purely statistical mindset.

At some point, you've got to look at the business impacts: cost, raw material availability, supply chain, safety, warehousing, you name it. There's a much bigger picture in a lot of these continuous improvement efforts. Again, think in terms of workflow rather than discrete thinking. Let's not just go out and run an MSA, or just go out and run a DOE; let's think about the whole workflow. How do we use a broad range of statistical methods to improve our lives and to make our customers happy?

Consider data, and the models that you derive from your data, as a very important corporate asset. Focus on the factors. Keep looking at the responses, of course, but in my days as a manufacturing engineer, I looked at the factors. There's no dial out there for yield, no dial for dimensional control, no dial for anything that's a response. The dials are on the factors, so you need to understand how well you can control the factors and how they affect your responses.

Focus on variation reduction; that's the takeaway. Don't get me wrong, meeting spec is important, let me say that clearly, but it's not the end game. You have to go beyond that and focus, just like Taguchi told us decades ago, on reducing variation and being on target.

We believe that if you do all of that, you'll get accumulating improvements to your process over time. Your operators will be happier, your suppliers will be happier because you're helping them and they're helping you, and you'll be satisfying your customers.

With that final thought, we'd like to say thank you. Juan and I, thank you for listening. Get in touch if you'd like to discuss any of these subjects, we'd be glad to help you.


