
Using DOE to Improve the Performance of a High-Speed Dynamic Seal

At Francis Medical, the cannula of our disposable delivery device is inserted into the urethra, and a small round catheter then exits the cannula into the prostate, where steam exiting the catheter is used to ablate tissue. Due to hydrostatic pressure from the bladder, the dynamic seal between the cannula and catheter is the only barrier along a possible fluid ingress pathway into the delivery device, and fluid ingress is undesirable.

Testing the dynamic seal on the bench is accomplished by pressurizing the cannula using a pneumatic pressure decay tester. While the output of this tester is continuous, the distribution is bimodal and is best modeled as a binary output: either the seal leaks or it does not. In JMP, there is no straightforward method to calculate the power and sample size needed to compare different design of experiments (DOE) studies for an attribute output.

In this paper, we outline the methodology we used to determine the number of factors to test and how many runs to complete, including a simulated power analysis. In addition, we discovered a unique way to condition the samples prior to pressure decay testing to create more dynamic seal failures than historical testing achieved. In the end, the results of the experiment allowed us to make definitive design decisions with confidence and improve the dynamic seal performance by a factor of 25 over the current design.


Hello, everyone. I'm Rebecca Breuer, and I'm joined today by Chad Naegeli. Today, we'll be presenting "Using DOE to Improve the Performance of a High-Speed Dynamic Seal."

A little bit about this dynamic seal: the high-speed dynamic seal is part of our medical device, which uses water vapor ablation technology to treat prostate cancer, and the components of interest in this DOE are shown on the screen. The way our device works is that the cannula is inserted into the urethra, the catheter is deployed into the prostate, and small holes in the catheter allow steam to exit and ablate the surrounding tissue.

One thing to know is that the bladder exerts a hydrostatic pressure on our device, and this could cause fluid to ingress into the device. In order to prevent that, we require a seal that fills the space between the catheter and the cannula and blocks the fluid ingress pathway.

We are able to test the effectiveness of that seal using a pressure decay test. We seal our device in a test fixture, pressurize it with a pneumatic pressure decay tester, and measure the pressure decay, which is equivalent to the seal leak rate. The issue we were seeing was that the seal leakage rate was higher than expected, so the goal of this DOE was to identify factors that would reduce the seal leak rate.

Now let's take a look at what the output from the pressure decay test looks like. On the screen, you can see two distributions, both displaying the same data in different ways. The top distribution shows the results of the pressure decay test in psi, which is a continuous output. The bottom distribution shows the same data modeled as a binary output of pass or fail. In the top distribution, the passing points are colored green; this big bin corresponds to pass, and the failing points are highlighted in red. Only a very small portion of the tests fail.

One important point is that the continuous output we get from the pressure decay test isn't necessarily relevant or useful to us. What we really care about is whether the test passes, meaning the seal doesn't leak, or fails, meaning the seal leaks. For this DOE, we're going to model the outcome as a binary response.

Before we ran this DOE, we brainstormed all the possible factors that could impact the seal leak rate, organized them into a fishbone diagram that we created in JMP, and narrowed the list down to the six factors we thought were most relevant for this case. You can see those factors, X1 through X6, listed here with their respective levels. Factors 1 through 5 each have 2 levels, and factor 6 has 3 levels. Now I'll hand it over to Chad.

Hi. I'm Chad Naegeli, and I'm going to talk a little bit about how we used a conditioning process to increase the number of failures we saw in the seal leak test. There were five different parameters that we looked at. For sterilization: previously, without the conditioning process, there was no sterilization, and we added a sterilization step. For test cycles: before the conditioning process, we ran a single test cycle, and in the conditioning process, we ran multiple. For seal state: before, we tested in a static state where the catheter didn't move, but in the conditioning process, we tested in a dynamic state.

Lastly, for the seal temperature, instead of ambient temperature, we did the testing at an elevated temperature. This conditioning process stressed the seal and created a 95% failure rate, which was much higher than what we had without the conditioning process. The reason we did this was to increase the statistical power and minimize the sample size.

As Rebecca said, for a binary output, we use the logistic equation to model the response. In our case, we had a 95% failure rate after the conditioning process. Using the logistic equation, we solved for the Y-intercept, as shown in the bottom left. Then we were able to create the table on the right by putting in different inputs for the base rate.
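
To make that algebra concrete: with an intercept-only logistic model, the failure probability and the intercept are linked by the logit transform, so the intercept is just the log odds of the base rate. For our 95% rate:

\[
p = \frac{1}{1 + e^{-\beta_0}} \quad\Longrightarrow\quad \beta_0 = \ln\!\left(\frac{p}{1-p}\right) = \ln\!\left(\frac{0.95}{0.05}\right) \approx 2.944
\]

The same transform generates the rest of the table: 75% gives 1.099, 50% gives 0, 25% gives -1.099, and 1% gives -4.595.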

For the case we just talked about, 95%, the Y-intercept is 2.944. Then we looked at four other scenarios, base rates of 75%, 50%, 25%, and 1%, and their corresponding Y-intercepts. We're going to use those numbers in a moment in a JMP demo to show you how we used this to create a power simulation.

Next, we're going to do a power simulation in JMP. I'm going to share my screen; I've created a JMP journal. What we're going to do is figure out what the sample size should be for this DOE. One of the things we look at is power, and power is defined as the ability to detect an effect when there is an effect present. If we had too low a sample size, ran a DOE, and couldn't detect a real effect because we didn't run enough samples, that would be a bad scenario.

The other scenario is that we run too many samples: we can detect the effect, but we've wasted resources. What we want is the right sample size. The way we're going to do that with a binary output is with simulation.

As a warm-up, we're going to do a one-factor DOE, and I'm going to open this up here. We went to the Custom Designer and created a DOE with one factor, a two-level categorical factor, with 10,000 runs. I did this just to verify that the formula works. I'm going to click on this file right here, and it brings up this JMP table. You can see 10,000 rows, and I want to key in on this formula.

This is the same formula I was showing you on the previous slide, and we put in 2.944, which comes from that table for a 95% failure rate. All it's doing is drawing from a random binomial distribution with one trial, using this formula for the probability. What we're going to do is run a distribution and see if we get 95% failures. If I click on this right here, you can see that for 10,000 runs, we got about 95%. That verifies that our formula is working correctly, and it's a way to build confidence in what we're going to do next.
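
For readers following along, a minimal JSL sketch of that warm-up column formula might look like this (the variable names here are illustrative, not necessarily those in our table):

```jsl
// Intercept-only logistic simulation: one Bernoulli draw per row,
// with failure probability p = 1 / (1 + Exp( -b0 )).
b0 = 2.944;                    // logit of the 95% base failure rate
p = 1 / (1 + Exp( -b0 ));      // evaluates to ~0.95
y = Random Binomial( 1, p );   // 1 = seal leak (fail), 0 = pass
```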

Next, we're going to create the six-factor DOE Rebecca just talked about. We're going to create it in JMP with six two-level categorical factors, a main effects model and, if possible, second-order interactions, and 16 runs. Then we're going to simulate: most of the time, there's a 95% failure rate, and if X1 is active, we ask what happens if there's a 75% leak rate at one level of X1.

I'm going to click on that. It's a script I created under DOE > Custom Design, and it creates this DOE. Let's key in on this formula. It's a little more complicated than the one from the earlier simulation: we created an if-then statement that says if X1, the first factor, equals L1, then we have a 95% failure rate; else, we have a 75% failure rate.
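
A hedged JSL sketch of that if-then column formula (our actual factor level names may differ):

```jsl
// If X1 is at its first level, simulate a 95% failure rate (logit 2.944);
// otherwise simulate a 75% failure rate (logit 1.099).
If( :X1 == "L1",
	Random Binomial( 1, 1 / (1 + Exp( -2.944 )) ),  // p ~ 0.95
	Random Binomial( 1, 1 / (1 + Exp( -1.099 )) )   // p ~ 0.75
);
```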

Let's look at that right now. That created either a 1 or a 0 in this column, with one trial per row. Let's run the model. I'll show you what we did here: we put Y Simulated and the trials column in the Y role. For the model effects, we put X1 through X6, and for the personality, we used a generalized linear model with the binomial distribution and the logit link function. We're going to run that.
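
The easiest way to get this launch is through the Fit Model dialog and then saving the script, but an approximate JSL version looks like the following (the column names and exact argument spellings are assumptions and may differ by JMP version):

```jsl
// Generalized linear model on the simulated binary response:
// binomial distribution with a logit link, main effects X1-X6.
Fit Model(
	Y( :Y Simulated, :Trials ),
	Effects( :X1, :X2, :X3, :X4, :X5, :X6 ),
	Personality( "Generalized Linear Model" ),
	Generalized Linear Model(
		Distribution( "Binomial" ),
		Link Function( "Logit" )
	),
	Run
);
```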

As we run the model, we want to key in on this platform right here where it says Effect Tests. We're going to right-click and choose Simulate. When we do that, the column to simulate in matches Y Simulated, and it's set to do 2,500 runs. If I ran that right now, it would rerun this model fit 2,500 times, but that would take too long, so I'm going to close out of that. We've already done it; that's what we have right down here. I'm just going to click on that. If I were to hit run, this is the table I would get: 2,500 rows with six different factors.

What we're going to do is run a distribution on those six factors and move this over so everybody can see. We're going to key in on X1, go down to the very last table where it says Simulated Power, look for alpha 0.05, and read the Rejection Rate column: we get 23%. Twenty-three percent is our simulated power for this simulation. If we look at the other effects, you can see the rejection rates are very low; they're right around 0.1.
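
In other words, the simulated power is just the fraction of the 2,500 refits in which the effect test rejects at the chosen alpha:

\[
\widehat{\text{power}} = \frac{\#\{\, i : p_i < \alpha \,\}}{2500}, \qquad \alpha = 0.05
\]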

Now I'll repeat that same simulation with the 95% Y-intercept but a 50% rate for X1. If I do that really quick and run the power analysis, you can see that for X1 we get 84%. Our power went from 23% to 84%, a big increase. The other effects that are not supposed to be active, X2 through X6, have very low rejection rates. I'll do one more, and then I think everybody gets the idea. Let's do an extreme one: 95% for the Y-intercept and 1% for X1. Running that really quick, you can see the simulated power for X1 is now 100%. We have really good power.

We also created a couple of other variables. We looked at changing the number of trials from 1 to 5. In all the cases I showed here, there was one trial, but you can imagine that for each experimental run, we could run multiple samples. We changed that in the DOE, ran it, and summarized it right here.
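
Conceptually, the only change to the simulation formula is the number of binomial trials per row; a sketch under the same assumptions as before:

```jsl
// With k samples per experimental run, each row becomes a binomial
// count of failures out of k trials instead of a single Bernoulli draw.
k = 5;                          // samples per run (we varied this from 1 to 5)
p = 1 / (1 + Exp( -2.944 ));    // base failure probability, ~0.95
y = Random Binomial( k, p );    // number of leaking seals out of k
// The trials column is then set to k so the GLM models y out of k.
```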

The three simulations I showed you had 23%, 84%, and 100% power. We also tried different numbers of trials for each experimental condition; this column over here is the total sample size. The design we ultimately picked is this one down here, which is still six factors, but with 48 runs and only one trial each. The reason we did this is that it resolved some of the correlations we had on the two-factor interactions, and we felt it was simply a better model for us.

That is how we created our power simulation for one active factor. You can see that we're able to detect a difference from 95% down to somewhere between 50% and 75% seal leak rate with a power of roughly 90%, which we feel is good confidence going into this design. I'm going to hand it back to Rebecca, and she's going to talk about how we analyzed the DOE.

There are a couple of different ways we used to visualize our data and analyze the results from this DOE, listed on the screen. The first three, univariate visualization, the partition model, and stepwise model averaging, were really just to help us visualize the data, see where the biggest breaks in the data are, and see which terms in the model appear to have the biggest signal-to-noise ratio. Then we used the generalized linear model to actually predict which combination of factors and levels would minimize the probability of seal leaks.

Now I'm going to go over to JMP and show you how we did some of those analyses. The very first thing we did was use the Distribution platform in JMP just to explore our factors. I've selected the seal leak results, and it looks like factor X6 at level none, factor X3 at original, and factor X4 have a lot of samples that fall into the seal leak category.

Next, we created a partition model in JMP. I've already created the model here using the Partition platform. We used a minimum split size of four, because that split size corresponded to the different treatments in our DOE. Then we split this model five times, and you can see over here on the left that we have one group where none of the seals leaked. It looks like factor X6 at the partial and full levels with factor X3 at enhanced is a good combination that will not leak. Then, if we scroll down to the column contributions, you can see that factor X6 was the most significant, followed by X3, X1, and X4.
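
For reference, a minimal JSL sketch of launching a partition model like this one (the response and factor names here are placeholders):

```jsl
// Partition (decision tree) on the binary seal leak result,
// with a minimum split size of 4 and five splits.
Partition(
	Y( :Result ),
	X( :X1, :X2, :X3, :X4, :X5, :X6 ),
	Minimum Size Split( 4 ),
	Split Best( 5 )
);
```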

Next, we used the model averaging feature in JMP. Using the Fit Model platform, we selected the Stepwise personality and used the model averaging feature. JMP created many, many models, looked at the top 5% of models with the lowest AICc, and averaged the coefficients for those terms. Those are summarized in this table here: we have 27 terms in our model, and it gives an estimate and standard error for each one.

What we're interested in is the signal-to-noise ratio. I have a formula here where we divide each estimate by its standard error, and that gives us a signal-to-noise ratio. Then, using Graph Builder, we can graph that and look at which terms in the model had the highest signal-to-noise ratio.
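
The ratio we plotted is simply each averaged coefficient divided by its standard error, with the magnitude compared against a cutoff of two:

\[
\text{SNR}_j = \frac{\hat{\beta}_j}{\operatorname{SE}(\hat{\beta}_j)}
\]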

We're only looking at a subset of the terms here, the ones with the highest signal-to-noise ratio, but it looks like factors X6, X3, and X4 all had signal-to-noise ratios above two, which is the cutoff we determined. The interaction between factors X4 and X6 is close to the cutoff; it's slightly less than two.

One downside of this model is that we can't use it for prediction: it has a continuous output, so we could end up with predicted values less than 0 or greater than 1, which isn't possible for a probability.

Then finally, we created a generalized linear model with only main effects included, using the binomial distribution and the logit link, to predict which factors and combinations would minimize the occurrence of seal leaks. I'm going to reduce the factors in my model: starting at the bottom with the factor with the highest p-value, I'll remove terms until all remaining terms have a p-value less than 0.05. When I do that, there are three terms left in the model. X6 is the most significant, followed by X3 and X4.
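
For reference, the reduced model has the form below; because the logit link maps any linear predictor back into (0, 1), the prediction is always a valid probability (the beta terms here stand for the fitted effects of the three surviving factors):

\[
\ln\!\left(\frac{p}{1-p}\right) = \beta_0 + \beta_{X3} + \beta_{X4} + \beta_{X6}
\quad\Longleftrightarrow\quad
p = \frac{1}{1 + e^{-(\beta_0 + \beta_{X3} + \beta_{X4} + \beta_{X6})}}
\]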

Then, if we move down to the prediction profiler, I've set the desirabilities to minimize the occurrence of seal leaks. That's what we're looking at here: the probability that the result is a seal leak, with desirability maximized. With this model, including factors X4, X3, and X6 at their levels of small, enhanced, and full, we're predicting less than a 0.1% chance of seal leak, which is pretty good. Also note that the AICc of this model is 39.8.

Then we're going to create another generalized linear model, but this time we're going to include factor-to-factor interactions in the model. We're still going to use the binomial distribution and logit link, but this time we're going to turn on the bias adjustment. I've already reduced this model down to significant terms to save time. It looks like X6, X3, and X4 are the most significant, which matches our other model. We also have some factor-to-factor interactions here.

Looking at this profiler, where I've set the desirabilities and maximized desirability just like in the other model, at our significant factors and levels of design one, small, enhanced, no, and full, we're looking at a very, very small predicted chance of seal leak, going to nearly 0%.

Now I'll hop back over to the PowerPoint. This table summarizes the four different models I just went through and the factor levels that were shown to be most desirable in each. Based on which factor level occurred most frequently across the four models, we were able to come up with an optimal group of settings for each factor.

Factors X1, X3, X4, and X6 were significant in all of the models. We chose design 1, enhanced, small, and full as the levels of those factors. We didn't choose anything for factors X2 and X5, because our models showed those factors weren't significant. Now I'll hand it back over to Chad.

Hi. Rebecca just showed the analysis of the DOE, and now we're going to summarize what the original design was for factors X1 through X6 and how the optimal levels differ from the original; you can see there's a lot of change in the optimal. We ended up not picking the optimal design, though. We picked the very last column on the right, which is the selected design.

The main changes were these: if you remember, X3 was a very large effect, and we went from original to enhanced. In X6, we went from none to partial. The other four factors we left largely the same, except for X5, where the models didn't really have an opinion, but we wanted that level for other reasons. That is what we picked for the design, and now we're going to talk about the conclusions.

When subjected to this extensive conditioning process, where we were able to create 95% failures in our samples, the original design predicted 94%, which is really close to that initial study we did, though with a fairly wide 95% confidence interval. The optimal design was the one shown in the middle column, and it predicted a very low leak rate with a very tight 95% confidence interval.

The selected design, while not quite as good as the optimal, still predicts a very low seal leak rate. And remember, this is under the extensive conditioning process; our actual seal leak rate is going to be much lower than that. If you compare the selected design to the original, it has a more than 25 times lower seal leak rate under this extensive conditioning process. We felt this was a really good experiment, and the results matched the subject-matter experts: the seal manufacturer came up with a lot of these factors for us to test, and the results really matched what they predicted would happen.

Rebecca's going to go to the next slide. We have references. We had a lot of help from JMP with this: Don McCormack from JMP and our local rep helped us quite a bit with figuring out the sample size and this methodology. Don gave a very nice talk in March for Discovery Summit Europe that goes into a lot more detail on how to determine sample size for pass-fail responses. Then we have a couple of other references as well.

Lastly, in the appendices, we have the power simulation and a summary of Rebecca's analysis, and in Appendix C, we have some tips and tricks we learned from running large DOEs like this, to stay organized and make sure you get good results. At this time, I'd like to thank you all for listening to our talk. This project was very successful on our end, and we're really happy to share these results and how we used JMP to solve our problem. Thank you again.