Improving gauge throughput for production demand while maintaining low measurement variation can be challenging. Often, throughput and measurement variation are competing variables and understanding the relationship between them can be difficult without utilizing a design of experiment (DOE).
In this case study, we show how JMP's Custom DOE platform can be coupled with a measurement systems analysis (gauge repeatability and reproducibility) to increase throughput while maintaining the accuracy and precision of the gauge. By utilizing JMP's DOE platform, we reduced the development time by an estimated 50% compared with a one-variable-at-a-time approach. We demonstrate the physical constraints of the gauge and how these constraints are added into the Custom DOE. We then model the DOE results, including optimizing the design space and experimentally validating the 60% increase in gauge throughput.
Hey guys. Good morning. My name is Michael Cantwell. I'm a development engineer with Peak Nano. Jake Miller and I are going to take you through a case study we've worked on: how to improve your gauge throughput by combining a custom DOE with a Gauge R&R.
Before we get into that case study, I want to go over Peak Nano a little bit, so you can see some of the products we make and the industries we serve. We're broken up into two business units. Jake and I both work on the optics side. In the optics business unit, we focus on designing and manufacturing lenses, as well as the design and assembly of full optical systems.
That means things like binoculars and night vision goggles; it could be a fire control system for a mortar or for an actual rifle. This business unit is focused heavily on defense applications and defense contracts, with some commercial applications as well.
The other business unit, the films business unit, starts with a base process technology that can be leveraged to create very unique product properties. One of the products they're scaling up right now is capacitor film with tailored properties, which their customers have found very valuable. Some of the applications they're getting involved with are fusion reactors, which have a large energy demand over a very short time period at startup, and aircraft launch systems; really, capacitors are needed anywhere there's a huge energy demand over a very short time period.
Other applications for this base process technology include food packaging and even recycling: films that hit the same barrier or mechanical properties while using more recycled content. Usually when you use recycled content, you can't hit those same properties, so that base process technology has been very advantageous for the films business unit.
Again, for the case study we're talking about today, Jake and I come from the optics business unit, and we're going to be talking about the lens manufacturing process and one gauge in that area specifically.
Let's walk through the agenda so you can see what we're going to be doing today. We'll start with a process flow schematic to help identify what the problem is. We'll go through the scale-up challenge and how the problem grew as we got larger and made more and more parts.
We'll go through the DOE, the design of experiment: how we set it up and why we set it up that way. We'll go through a little bit of the Gauge R&R and the sample selection. We'll talk through the results and a little bit of discussion, and end with some conclusions.
Now, looking at the process flow schematic to help identify the problem we're talking about today: in the lens manufacturing process, we've broken things up into four process areas, just to keep it simple. First is the optical design, where the designer defines what the radius of curvature of the lens needs to be for a specific surface. Then there's the high-precision lathe process. This is the process Jake works in very heavily; it's where we actually manufacture that radius of curvature.
Then, because we don't live in an ideal world, we have to measure what that surface actually is. We use a lens surface gauge, which characterizes what that surface looks like. This is the gauge we're talking about today, so this case study is really all about this surface gauge. It outputs a lot of information, but for this case study we're going to stick to one characteristic: the radius of curvature.
Then quality control. In quality control, we've set spec limits of around plus or minus 10 microns, so we know that if we're within plus or minus 10 microns, we're passing specification. If not, we have the opportunity to rework these lenses, and reworking is a standard part of the process. Each surface finish pass removes just a few hundred nanometers, so a lens can be reworked two or three times.
The next thing I want to identify is that the bottleneck in this process really became the surface gauge. There are about four lathes for every surface gauge, and each part takes five to seven minutes to measure. The surface gauge was the bottleneck in our process, and being able to capture good information on the actual surface values is paramount for us in development and as we scale up.
Talking about scale-up now: anytime you start a new product, especially in hardware, you start with very few parts, and you do development; then as you grow, you start making more and more parts, and quality control and quality assurance must scale with the business. The gauge throughput became a larger and larger constraint at that bottleneck, so measurement time and the availability of operators became bigger issues.
Here's the baseline for the lens surface gauge: the measurement time was anywhere between six and seven minutes a part, and the gauge variation was about 0.02%, so we were very satisfied with the gauge variation. I don't think anyone can complain about 0.02% on a gauge, but the measurement time was an issue for us. The question became: how do we effectively decrease the measurement time without impacting the gauge error or variation?
That's where the DOE, or design of experiment, came in. Our approach was to utilize a DOE and, because we're talking about a gauge, to also utilize a Gauge R&R. I'm going to start with the DOE, and then we'll talk about the Gauge R&R.
I'm sure if you've been around JMP much, you've heard what a DOE is, but I really want to use this graph down here on the bottom to advocate for why you should take a DOE approach as opposed to a one-variable-at-a-time approach. For the one-variable-at-a-time approach, or OVAT, you can see the fallacy, or why you shouldn't use it, down here in the bottom left.
A lot of times, when I'm explaining this to new engineers or other coworkers, this graph has been very helpful to show why you can get to a false optimum. A lot of times you'll take one variable, vary it, find what the optimum is for that variable, and then say, "Well, we've optimized this variable; now we need to optimize the second variable," and you start changing the second variable while holding the first at its optimized condition. Then you say, "Well, this is optimized over both variables." But you can see this is actually a false optimum.
When you take a factor screening or response screening approach, both within a design of experiment framework, you can get closer to the true global optimum. The other big benefit of a DOE approach is that you usually end up running fewer experiments and you get a better answer, so it's a win-win to take a DOE approach.
In a DOE approach, sometimes you have physical constraints in your design space. That will be included today: we'll talk about how to add custom constraints to your design matrix inside of JMP. And then lastly, the Gauge R&R. We're talking about modifications to a gauge, so anytime we modify a gauge, we need to consider how that is going to impact the Gauge R&R, or the gauge variation. Very simply, a Gauge R&R is used to separate out sources of variation, whether they come from the gauge, the operator, or the process itself.
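As a rough sketch of what "separating out sources of variation" means outside of JMP, here is an ANOVA-style repeatability calculation in Python. The column names and the three-repeat structure are assumptions for illustration; the presenters ran this in JMP, and a full crossed study with multiple operators would add reproducibility components on top of this.

```python
import numpy as np
import pandas as pd

def gauge_repeatability(df, part_col="part", value_col="radius", n_reps=3):
    """One-way ANOVA variance components for a single measurement method:
    split observed variation into repeatability (the gauge) and part-to-part
    (the process). Assumes each part was measured n_reps times."""
    groups = df.groupby(part_col)[value_col]
    k = groups.ngroups                                    # number of parts
    ms_within = groups.var(ddof=1).mean()                 # pooled within-part mean square
    grand_mean = df[value_col].mean()
    ms_between = n_reps * ((groups.mean() - grand_mean) ** 2).sum() / (k - 1)
    var_gauge = ms_within                                 # repeatability variance
    var_part = max((ms_between - ms_within) / n_reps, 0.0)
    total = var_gauge + var_part
    return {"sigma_gauge": np.sqrt(var_gauge),
            "pct_gauge_rr": 100.0 * np.sqrt(var_gauge / total)}

# Hypothetical usage: one row per (part, repeat) measurement for one method.
# df = pd.read_csv("method_1_measurements.csv")
# print(gauge_repeatability(df))
```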
Here's another high-level view of what we're going to be doing throughout this case study, and really the workflow we're using in JMP. We'll start with setting up the DOE, looking at some of the custom constraints and the min-max limits. Then we'll get to the gauge: we'll talk about the Gauge R&R and why we chose certain samples to represent our production, and then we'll look at some pre-recorded videos of the data analysis in JMP.
We'll go through how to build a model for our data, then maximizing desirability, or choosing the optimal gauge setting conditions. We'll use the simulator to better visualize the space; I really like the simulator for testing and visualizing the model. And then finally, verification: anytime we build a model, we need to do some verification on it, and we did this experimentally.
Next, I'm going to turn it over to Jake to walk us through the DOE and the Gauge R&R setup.
All right. Hi, everybody. To really get into this, we first have to understand what we're trying to get out of it, and then understand what input variables we have available to turn to drive toward that outcome. First, the responses we're after are, as Michael mentioned, minimizing the gauge variation, which, of course, we were fine with where it began, and minimizing the measurement time.
In the upper left-hand corner, we have those two responses in the JMP dialog box, and next to that are the factors: AP, RP, and Rotational Velocity. These are all continuous variables, and they are the machine input variables we can modulate to try to get a better response. On the next slide, we'll go through what these are and begin to understand their physical impact and their limitations.
On the next slide, we start getting into the method by which this gauge actually qualifies the surface of these lenses. First, a series of definitions of our input variables: the Azimuthal Pitch is AP, the Radial Pitch is RP, and the Rotational Velocity is Omega.
Starting with the side view, you can see that the test piece is rotated underneath an optical sensor. The optical sensor moves across the part as the part is rotating, which takes us to the top view. You'll see there are a number of data points outlined as pink dots on this top view.
The Azimuthal Pitch is the distance between data points along this spiral measurement path, and the Radial Pitch is the distance between the spiral arms of the path. The path we're looking at is the line traced across the surface of the part as the optical sensor traverses it while the part rotates underneath.
There are some insights you can take away from that. To minimize the measurement time, we probably want to maximize the Radial Pitch, because if we take a greater distance between measurement passes, up to a certain point, you'd assume the measurement time would be reduced. But the combination of the Azimuthal Pitch and the Radial Pitch gives us the density of the data collected across the surface. RP, AP, and Omega, along with the traversal rate, form our machine constraints.
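To make the time intuition concrete, here is a minimal sketch of a spiral-scan timing model in Python. The functional form and the example numbers are illustrative assumptions, not the gauge vendor's actual timing equation: the scan takes (radial span / RP) revolutions, and each revolution takes 360/Omega seconds.

```python
def measurement_time_s(radial_span_mm, rp_mm_per_rev, omega_deg_per_s):
    """Rough spiral-scan time: revolutions needed to cover the radial span,
    times the time per revolution. Illustrative model only."""
    revolutions = radial_span_mm / rp_mm_per_rev      # spiral arms across the part
    seconds_per_rev = 360.0 / omega_deg_per_s         # one full rotation
    return revolutions * seconds_per_rev

# Illustrative numbers: 10 mm radial span, 0.2 mm radial pitch, 60 deg/s rotation
# -> 50 revolutions x 6 s = 300 s, the same order of magnitude as the baseline
#    quoted earlier, and it shows why raising RP (or Omega) drives the time down.
print(measurement_time_s(10.0, 0.2, 60.0))
```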
Going on to the next slide, we'll take a look at what the actual limiting factors are. The main DOE constraint we came to is that this gauge has a physical limit on how quickly the optical sensor can traverse across the test piece while it's measuring. Equation one is an inequality with the value 0.34 mm/s; that is simply the limit, and the optical sensor can traverse the surface no faster.
Then, due to the measurement principle we went over on the previous slide and the spiral data path, we get the insight that, for every rotation of the test piece, there is a relationship between the Radial Pitch and the rotation, because of the spiral arms that are generated. That leads us to equation two. However, this equation is nonlinear, so we can't use it directly as a JMP constraint. You can see on the right where we define factor constraints within JMP, but we can't enter this equation yet because it is not a linear constraint, so on the next slide we'll show you the effort that went into linearizing it.
We combine the first two equations, and that gives us equation three. Then, if we simply redefine Radial Pitch as one over Radial Pitch within this equation, it yields equation four, which is the linearized form we can enter into JMP, so we'll move on to the next slide now.
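The slide equations themselves aren't reproduced in the transcript, so the following is a plausible reconstruction in my own notation, under the assumption that RP is in mm per revolution, Omega is in degrees per second, and the radial traversal rate is v = RP*Omega/360; the 122.4 constant is just 0.34 x 360 under that assumption and is not taken from the slide.

```latex
\begin{align}
v &\le 0.34~\text{mm/s}
  && \text{(1) traversal-rate limit} \\
v &= \frac{RP\,\omega}{360}
  && \text{(2) spiral geometry (assumed form)} \\
\frac{RP\,\omega}{360} &\le 0.34
  && \text{(3) combined; nonlinear, since it multiplies two factors} \\
\omega - 122.4\,u &\le 0, \qquad u \equiv \frac{1}{RP}
  && \text{(4) linear in the factors } \omega \text{ and } u
\end{align}
```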
On this slide you'll see we still have the same responses as before, with Radial Pitch added on, and we haven't selected a goal for it. Our insight may lead us to believe that maximizing this value would accomplish at least one of the goals, minimizing the measurement time, but we don't really know what it would do to the gauge variation, so we have to see what the results of the DOE tell us to do.
Then we have the factors to the right: AP, the redefined one over RP, and Rotational Velocity. Below that, we have the linear constraints, which we can now actually enter into JMP, and we'll be able to see how this applies toward generating the DOE methods within which we'll collect the data. We'll go on to the next slide now.
We realized through this investigation that Azimuthal Pitch and Radial Pitch don't actually have any interaction; Azimuthal Pitch comes along for the ride for free. Within the limitations of the gauge's data acquisition system, you can collect essentially an unlimited number of data points along each spiral arm. The real questions are: how close are those spiral arms to one another, how fast can the sensor traverse the surface, and how fast can the part rotate? Those physical constraints are what we're really interested in.
Azimuthal Pitch doesn't really affect time, and it doesn't seem to affect the gauge variation much either. We knew this from experience and from a few simple tests we ran separately from this DOE, so we left that interaction out of the DOE. Then you can see we selected the response surface model, and it gives us all the effects we want to test for with the factors we've entered into JMP. To fully define this response surface, it tells us we need a minimum of seven different methods to define the space, if you will. We'll go along to the next slide here.
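To make "response surface model" concrete, here is the general form of a full quadratic in the two factors drawn on the DOE test matrix, written in my own notation (the actual design also carries AP as a factor and works in 1/RP): six coefficients to estimate, which is why a handful of distinct methods, seven here, is needed to define the surface.

```latex
\hat{y} \;=\; \beta_0 \;+\; \beta_1\,RP \;+\; \beta_2\,\omega
        \;+\; \beta_{12}\,RP\,\omega \;+\; \beta_{11}\,RP^2 \;+\; \beta_{22}\,\omega^2
```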
There are two really important things here. On the left we have the table, which lists the seven methods I just mentioned. Probably more useful than the table is the graphical representation next to it, the DOE test matrix. On the Y-axis we have Radial Pitch, and on the X-axis we have Rotational Velocity.
Azimuthal Pitch is left out because it doesn't have that significant an impact. You can see the mins and maxes defined by the gauge in blue. Obviously, you can't have a Radial Pitch below zero, that doesn't make any sense, and of course there is a maximum value for the Radial Pitch.
The same goes for the Rotational Velocity. The interesting thing is the red constraint line: this is the linearized constraint we defined earlier, which captures the relationship between the Radial Pitch and the Rotational Velocity. Once we define this box, this is the space within which we want to model a response surface for this gauge and its performance.
You'll see that the measurement methods outline the edges of the DOE test matrix. Once we have all of this data, we'll be able to statistically define any arbitrary state within this graphically represented space, and that's really what we're doing here. That's as far as we need to go for the method. Now, for the Gauge R&R, we obviously need machine repeatability. We want to verify that it is not changing with any of these methods, or if it is, then we can say there's more risk with certain methods than with others.
Each of these methods was repeated three times. Now we're going to talk a little bit about the sample selection and what we did there to mitigate any potential risk, so we'll go along to the next slide now.
For the sample selection, we wanted whatever method we found to generalize across any potential sample we could make, now or in the future. There is a risk that if we had tested only one sample, the gauge might not perform equivalently across every sample we may want to measure; we could get an optimum for exactly what we're making right now, but it wouldn't be future-proof or generalizable.
For the purposes of this study, we stuck with just the radius of curvature. Of course, there were a few more things that went into it, but you can see we have a range of both convex and concave surfaces. The samples are listed out, and there were three measurements per method so that we could generate the Gauge R&R data from this DOE.
We go along to the next slide now. All right, we're at the results part, so I'm going to hand it back to Michael, and he's going to finish this out for you guys.
Thank you, Jake. Let me bring us back to our workflow just to review it again. Jake just took us through how we set up the DOE and the Gauge R&R and how we chose certain samples, and we're skipping, of course, the data collection. Before we get into the data analysis, I've got one or two slides on what we're going to be reviewing from the results before we actually build that model.
Once we start working inside of JMP, we're going to look at how we build a model with the data and how we maximize desirability, or optimize the gauge setting conditions. We'll go through the simulator to help us better visualize what this model is telling us, and we ran some experimental verification, so we will review those results as well.
What are we looking for in these results? What do we really need to analyze? First is accuracy. We have one slide here on accuracy: verifying that the method doesn't impact the nominal measurement. A good example is measuring thin films; if you're using a pair of calipers, you can actually squeeze the film down and throw off the thickness, so the measurement method can impact accuracy. We want to verify we're not impacting accuracy.
The second thing is gauge variation. This is one of the critical things we laid out earlier: we want to meet or reduce the variation compared to the baseline. And then measurement time: we want to increase the throughput, or decrease the measurement time. We need all three of these for this case study to be successful.
Looking at the accuracy, we've summarized it down to one graph. The instrument outputs a lot of data, so we're just looking at the radius of curvature. Our tolerance here is about plus or minus 10 microns, and keeping your tolerances in mind is always important.
Looking at part number two specifically, across all the methods, except for maybe method number six, we're within a range of about plus or minus one micron. Knowing our spec or tolerance limit, we feel very confident after reviewing this data that the accuracy was not impacted by the method, so that's check mark one for what's important from our results.
Now we're going to use JMP to actually build the models for gauge variation and measurement time. These are some videos that have been pre-recorded and dropped into the PowerPoint, so bear with us if we have any typo issues as we work through them.
All right, so first I just want to show you the data table, and I've done some pre-work on it. Radius of curvature is what's output from the gauge; we've turned that into the gauge variation using the Gauge R&R platform inside of JMP. Then there's measurement time, which is recorded while we're collecting the data.
I've grouped those two variables together as the output variables. I've also grouped together the three input variables: Radial Pitch, Azimuthal Pitch, and Rotational Velocity, where degrees per second is the Rotational Velocity. Once we've got our data table, we pull up the Fit Model platform to start modeling how our input variables impact our output variables.
In that model, we specify our output variables in the Y role and then our input variables. We choose the Response Surface macro based on how we set up the DOE. When we choose the Response Surface macro, it puts in all the effects we were testing in the DOE; however, it also adds the Radial Pitch crossed with Azimuthal Pitch term that was left out of the DOE, and we're going to see why that's important: as soon as we run this model, you should see an error pop up.
We left this term out of the DOE; we didn't actually test for it, but we're going to try to build a model with it in there. When we run that model, the very first thing I want everyone to pay attention to is the singularity details that pop up. They're telling us, "Hey, something's not right with your data set; you can't actually estimate all the effects in this model." That's because the Radial Pitch by Azimuthal Pitch term was not included in the DOE setup.
If I take that term and remove it from the model, you can see the singularity detail disappears. Once it's gone, we can start looking at the p-values to test for statistical significance. I personally default to a threshold of about 0.05; for some data sets I'll use 0.1, but we're going to use 0.05 for this case study.
I remove the highest p-value term first, and once I do that, the remaining effects become significant. These effects are going to stay in the model for the rest of what we look at today.
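The presenters do this interactively in JMP's Fit Model platform; as a rough equivalent outside JMP, here is a sketch of the same response-surface fit and backward elimination in Python with statsmodels. The file name and column names are hypothetical stand-ins for the DOE table.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical table: one row per DOE run, with factors and measured responses.
df = pd.read_csv("doe_results.csv")   # columns: rp_inv, ap, omega, gauge_var, meas_time

# Response-surface terms for the effects that were actually varied in the DOE.
# The AP x (1/RP) cross term is deliberately omitted: it was never tested, and
# including it would make the design matrix rank-deficient, which is exactly
# what JMP's "singularity details" warning flags.
X = pd.DataFrame({
    "rp_inv":         df["rp_inv"],
    "ap":             df["ap"],
    "omega":          df["omega"],
    "rp_inv_x_omega": df["rp_inv"] * df["omega"],
    "ap_x_omega":     df["ap"] * df["omega"],
    "rp_inv_sq":      df["rp_inv"] ** 2,
    "ap_sq":          df["ap"] ** 2,
    "omega_sq":       df["omega"] ** 2,
})
X = sm.add_constant(X)
y = df["gauge_var"]

# Backward elimination: drop the highest-p-value term until everything left
# clears the 0.05 threshold used in the talk.
threshold = 0.05
while True:
    fit = sm.OLS(y, X).fit()
    pvals = fit.pvalues.drop("const")
    if pvals.empty or pvals.max() <= threshold:
        break
    X = X.drop(columns=[pvals.idxmax()])

print(fit.summary())
```

For simplicity this sketch ignores effect heredity (keeping a main effect whenever its square or interaction stays in), which a response-surface analysis would normally respect.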
Next, we're going to use the profiler to actually optimize the gauge settings. You can pull up the profiler under the red-triangle drop-down menu, and the very first thing we're going to do is turn on the desirability functions.
The default is to maximize each output variable. If you work through the Custom DOE setup start to finish, it will remember the goal for each output variable, but we pulled this data up from about a year ago for this case study, so it defaults to maximizing these output variables. You can very simply change those back: if you hold Control and right-click in the desirability window, you can change it from maximize to minimize.
Once you set the goal for each output variable, you can go back and maximize desirability. Now we can look at the three input variables and see what the optimized gauge settings are based on this model. These will become method eight in some of the upcoming slides. These are the optimized gauge setting conditions.
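For readers who want to see the mechanics, here is a minimal sketch of the same idea outside JMP: Derringer-Suich "smaller is better" desirabilities for the two responses, combined by a geometric mean and grid-searched over the constrained factor space. The prediction functions, factor ranges, desirability anchors, and the 122.4 constraint constant are all illustrative assumptions.

```python
import numpy as np

def desirability_min(y, best, worst):
    """Derringer-Suich 'smaller is better': 1 at/below best, 0 at/above worst."""
    return float(np.clip((worst - y) / (worst - best), 0.0, 1.0))

def optimize_settings(predict_time, predict_var,
                      rp_inv_range=(0.5, 20.0), omega_range=(5.0, 200.0)):
    """Grid-search the feasible (1/RP, omega) region for the settings that
    maximize the combined desirability. Ranges and anchors are illustrative."""
    best_so_far = (None, -1.0)
    for u in np.linspace(*rp_inv_range, 200):             # u = 1/RP
        for w in np.linspace(*omega_range, 200):
            if w > 122.4 * u:                              # linearized traversal constraint
                continue
            d_time = desirability_min(predict_time(u, w), best=60.0, worst=420.0)
            d_var = desirability_min(predict_var(u, w), best=0.01, worst=0.05)
            d = np.sqrt(d_time * d_var)                    # overall desirability
            if d > best_so_far[1]:
                best_so_far = ((1.0 / u, w), d)
    return best_so_far                                     # ((RP, omega), desirability)

# Throwaway surrogate predictions standing in for the fitted models:
settings, d = optimize_settings(lambda u, w: 3600.0 * u / w,
                                lambda u, w: 0.01 + 0.02 / u + 1e-4 * w)
print(settings, d)
```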
One thing I have run into when discussing JMP and modeling data is that this is only one way to view the data. Some people struggle with these profilers, so there are better ways to visualize it, and we're going to use the simulator to better visualize this data.
Underneath the prediction profiler we can pull up the simulator, and this allows us to test different input settings for each variable. I'm going to change our input settings to randomly look at different settings for the inputs. If you hold Control and click, it will multi-select as you move through. I like to use a uniform distribution; your distribution type really just depends on your data and your gauge or setup. The lower and upper limits default to the min and max limits that are in the data table.
Then I'm going to simulate this into a new table, and once it's in a new data table, you can use any graphing platform you want. On the next slide, we're going to look at a contour plot to see how this model and this data actually define our design space.
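Outside of JMP, the same "simulate uniform inputs, then graph the predictions" step might look like the sketch below, with matplotlib contour plots standing in for JMP's Contour Plot platform. The factor ranges, the 122.4 constraint constant, and the surrogate prediction functions are illustrative assumptions; in practice you would evaluate the fitted reduced models from the earlier step.

```python
import numpy as np
import matplotlib.pyplot as plt

# Surrogate predictions standing in for the fitted reduced models.
def predict_time(u, w):                  # u = 1/RP, w = rotational velocity (deg/s)
    return 3600.0 * u / w

def predict_var(u, w):
    return 0.01 + 0.02 / u + 1e-4 * w

rng = np.random.default_rng(1)
n = 5000
u = rng.uniform(0.5, 20.0, n)            # uniform draws over illustrative factor limits
w = rng.uniform(5.0, 200.0, n)

keep = w <= 122.4 * u                    # drop points outside the linearized constraint
u, w = u[keep], w[keep]

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
for ax, z, title in [(axes[0], predict_time(u, w), "Measurement time (s)"),
                     (axes[1], predict_var(u, w), "Gauge variation")]:
    filled = ax.tricontourf(w, 1.0 / u, z, levels=15)   # x = omega, y = RP, as on the slide
    fig.colorbar(filled, ax=ax)
    ax.set_xlabel("Rotational velocity (deg/s)")
    ax.set_title(title)
axes[0].set_ylabel("Radial pitch")
plt.tight_layout()
plt.show()
```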
Now you can see a contour plot for measurement time on the left and a contour plot for gauge variation on the right, our two output variables. The Y- and X-axes are the same axes we were looking at earlier with Jake in the design space, and I've overlaid the constraint along with the seven DOE methods.
I've also put in the optimized gauge setting conditions from maximizing the desirability. I think this really helps people visualize what the design space looks like for our gauge settings. We were lucky that the models for the two outputs were actually pretty similar; these contour plots can look very different, and it's really just based on your system.
Now we're going to look at verification of that global optimum. The seven DOE methods are shown here in gray, and the optimized method, method eight, is shown in red. You can see that for both gauge variation and measurement time, we had a reduction compared to the DOE results.
This is exactly what we were looking for from this case study, and we were very glad when we saw these results. A DOE starts out a bit abstract; you look at the results and build the model, and it's still a little abstract until you really bring it home with verification, so these were really good, positive results for us.
All right, to end with conclusions: a Gauge R&R was combined with a DOE to rapidly and successfully increase our gauge measurement throughput, and we estimate we reduced the development time by about three weeks. We were also able to include custom constraints in the design matrix to prevent operation outside of our physical gauge limits. And lastly, we were able to produce an optimized method by creating a statistically valid model based on our DOE results.
Thank you guys for listening. I'm going to end here with acknowledgments to some of our funding partners. Thank you.