Quality by Design (QbD) is a systematic approach for building quality into a product. The Design Space Profiler in JMP helps solve the fundamental QbD problem of determining an optimal operating region that assures quality as defined by specifications associated with Critical Quality Attributes (CQAs) while still maintaining flexibility in production. 

In this demonstration, learn how to use the Design Space Profiler and the Simulator, tools within the Prediction Profiler, to find the design space and robust areas within the design space suited for normal use. A toxin neutralization bioassay example from the ICH Q14 Analytical Procedure Development guideline is used. The Prediction Profiler in JMP has long been a powerful tool for visualizing and optimizing models. The addition of the Design Space Profiler and the Simulator within the Prediction Profiler makes it an indispensable tool for high-quality product and process innovation.

Hello, my name is Laura Lancaster, and I am here today to talk about finding optimal operating regions with the JMP Design Space Profiler and Simulator. I feel confident that everyone attending JMP Discovery Summit has a strong desire to excel at innovation. Many of you work very hard to innovate new products and processes, and we, the JMP developers, work very hard to create software that accelerates and improves innovation.

Because of the pressures to innovate quickly, the manufacturing processes for these new products often need to be designed quickly. We need to use software that helps us design, model, and analyze our data so that we can reach our goals for producing high-quality products as quickly as possible. When we produce a product that is well-established and has manufacturing that is stable and capable, we might expect to see quality results like the Process Capability Report on this slide.

This is data from a drug manufacturing process whose batches are meeting specification 99% of the time. However, when we're starting out with a new product, we usually can't expect to get such great results at first, especially not just by luck. Instead, we're more likely to see results like this slide when we start out.

We may not know how to produce a high-quality product or how to do it consistently. This is especially true if we don't design, analyze, model, and predict well. We definitely want to improve from being out of spec 33% of the time, like in this report.

Thankfully, JMP has added a feature to the Prediction Profiler called the Design Space Profiler. It can help us quickly get from a messy and incapable initial process, like on the left here, to a stable and capable process, like on the right, in a structured and repeatable way. Let's walk through how this process works.

Once we have designed a new product, we begin by defining the quality characteristics that need to be achieved to ensure the desired quality. Then we identify which of those quality characteristics need to be within appropriate limits or ranges to ensure the product quality. In the pharmaceutical industry, these are often called critical quality attributes, or CQAs for short.

In our example, we have two critical quality attributes, yield, which needs to be greater than 75%, and drug impurity, which needs to be less than 1.25%. We can see in this example, starting out, that we're pretty incapable. We have 33% out of spec for yield and 16.5% out of spec for impurity. We're not doing very well at first.

To ensure that we're making a high-quality product, we need to think about which parts of our production process affect the outcomes of our product. We need to do experimental studies to determine which process parameters have a high impact on the critical quality attributes. These important process parameters, whose variability impacts our critical quality attributes, are called critical process parameters or CPPs for short.

If we're smart about how we go about these studies, the data that identified the critical process parameters will also give us a statistical prediction model that shows the mathematical relationship between the critical process parameters and the critical quality attributes.

Our next step is to design and run an experimental study to identify our critical process parameters. Once we finish the experiment, we build a model that identifies pH, temperature, and vendor as having a big impact on the amount of yield and impurity in our product. In this case, pH, temp, and vendor are our critical process parameters.

Now, since we were smart about how we designed our experiment for identifying the critical process parameters, we then use those results to find a statistical prediction model of our outputs, the critical quality attributes in terms of our critical process parameters.

The prediction model that we build is a least squares response surface model. This slide shows the Prediction Profiler for our prediction model, which we can use to visualize the mathematical relationship between the critical process parameters and the CQAs.

Now, once we have the prediction model, we could, in principle, use it like a crystal ball to explore the outcomes at many different process parameter settings. We could then find the combinations of pH, temperature, and vendor that result in a product that has a high probability of being in spec.

The graph in this slide shows a scatter plot of the process parameter settings that have been randomly and uniformly varied across some initial reasonable ranges. We can use these simulated settings as input to our prediction model and then determine if the resulting product would be in-spec for both critical quality attributes or not. If we color the points according to whether the result is in-spec as green or out-of-spec as red, we get a graph like this one.
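To make this simulation idea concrete, here is a minimal Python sketch of the same approach. The quadratic prediction models, coefficients, ranges, and function names are all hypothetical stand-ins for the response surface JMP fits, not values from the talk:

```python
import random

# Hypothetical quadratic prediction models standing in for the fitted
# response surface; coefficients and ranges are illustrative only.
def predict_yield(ph, temp):
    return 80 - 2.0 * (ph - 6.5) ** 2 - 0.05 * (temp - 40) ** 2

def predict_impurity(ph, temp):
    return 1.0 + 0.3 * (ph - 6.5) ** 2 + 0.002 * (temp - 40) ** 2

def simulate_uniform(n=10_000, ph_range=(5.5, 7.5),
                     temp_range=(30.0, 50.0), seed=1):
    """Vary the parameters randomly and uniformly, then color each point
    green (in spec for both CQAs) or red (out of spec for either)."""
    rng = random.Random(seed)
    points = []
    for _ in range(n):
        ph = rng.uniform(*ph_range)
        temp = rng.uniform(*temp_range)
        ok = predict_yield(ph, temp) > 75 and predict_impurity(ph, temp) < 1.25
        points.append((ph, temp, "green" if ok else "red"))
    return points

points = simulate_uniform()
in_spec_portion = sum(p[2] == "green" for p in points) / len(points)
```

Plotting these points in the factor space, colored by the third element of each tuple, reproduces the kind of green/red scatter plot shown on the slide.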

This graph nicely shows the relationship between the process parameter settings and getting quality product. Now, after looking at this data, and for business reasons, the company decides to go with the cheap vendor, since it seems reasonable and will also save them money.

The graph on the left shows the scatter plot when we only consider the cheap vendor. We can clearly see the region where the parameter settings result in quality product, which are green. In the graph on the right, it is apparent that if we keep pH and temperature within the blue box, we will produce product that meets our quality requirements most of the time, so the blue box is our optimal operating region.

Now, the simulated points we used to find the optimal operating region were varied randomly and uniformly. But we know that our pH and temperature processes actually follow a normal distribution. In fact, our pH and temperature processes were initially run with distributions like the histograms on the left, resulting in product that was out of spec 36.4% of the time, as you see in the scatter plot on the right.

But if we can change the target and the variation of our pH and temp processes to run within the center of our optimal operating region, the blue box that we found earlier, we can get quality product 99.9% of the time, as we see in this scatter plot. Note that I placed spec limits on the histograms on the left to show the ranges of our optimal operating region, highlighting our effort to center the processes and keep their variation within the design space; this would be the result.
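The shift from uniform exploration to a centered normal process can be sketched in a few lines of Python. The in-spec check and the design-space box below are hypothetical placeholders for the fitted models and the blue box from the slides:

```python
import random

# Hypothetical in-spec check standing in for the fitted CQA models.
def in_spec(ph, temp):
    return (80 - 2 * (ph - 6.5) ** 2 - 0.05 * (temp - 40) ** 2 > 75
            and 1.0 + 0.3 * (ph - 6.5) ** 2 + 0.002 * (temp - 40) ** 2 < 1.25)

def quality_rate(ph_box, temp_box, n=20_000, seed=2):
    """Run pH and temp as normal processes centered in the operating
    region, with sigma = range / 6 so the box edges sit at 3 sigma."""
    rng = random.Random(seed)
    def draw(lo, hi):
        return rng.gauss((lo + hi) / 2, (hi - lo) / 6)
    hits = sum(in_spec(draw(*ph_box), draw(*temp_box)) for _ in range(n))
    return hits / n

rate = quality_rate(ph_box=(6.2, 6.8), temp_box=(36.0, 44.0))
```

With the process centered well inside the in-spec region, nearly every simulated batch passes both specs, which is the effect the slide illustrates.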

It turns out that this idea for finding an optimal operating region has been around for a long time and was first published by Joseph Juran way back in 1992 in his book, Juran on Quality by Design. He wrote about a systematic approach for incorporating quality into the entire life cycle of a product, beginning at the design phase.

Historically, in many countries where the pharmaceutical industry must comply with regulatory agencies such as the FDA or the EMA, approval was required to be able to change any production process settings. This was costly and time-consuming for both the manufacturers and the regulators. It was recognized that Juran's ideas could be used to determine safe operating regions within which pharmaceutical manufacturing processes could vary without requiring a recertification.

A few years later, the industry adopted the Quality by Design principles, which I'll refer to as QbD for short. In 2005, the International Conference on Harmonisation (ICH) began publishing guidelines for these ideas. The ICH Q8(R2) document is the main document that discusses design space. It formally defines the design space as a multidimensional combination and interaction of material attributes and process parameters that have been demonstrated to provide assurance of quality.

The approach is based on Juran's QbD ideas for how to apply design of experiments to this problem. JMP is the world's leading tool for the design and analysis of experiments, and JMP's best tool for interactive model visualization, exploration, and optimization is the Profiler. That Profiler now has a built-in capability called the Design Space Profiler, which helps us find and optimize design spaces for our processes.

I will be using some terminology and examples from the pharmaceutical industry in this talk, but the same problem is present in every innovative industry. The solution is the same regardless of the types of processes and products.

Now I would like to show how we can use the Design Space Profiler in JMP to find the optimal operating region, or design space, for the example we have already been looking at in the introduction. Just to recap, for our drug manufacturing example, we had two critical quality attributes, yield and impurity, and three critical process parameters, pH, temp, and vendor. We had decided to only look at the cheap vendor, and our goal is to produce quality product 99% of the time.

Now let's go to JMP to see how to use the Design Space Profiler to find our Design Space or optimal operating region for this product. Let's go to JMP. Now we're in JMP, and this is the data table that was created by JMP's Custom Design platform. When we designed our experiment, we ran the experiment, we populated our critical quality attribute columns with the experimental results, we found the prediction model, and I've saved a script of that prediction model to the data table.

I also wanted to point out that I've already saved the spec limits as column properties to the critical quality attribute columns. You can see right here. That is important because the Design Space Profiler has to know the spec limits of the critical quality attributes to run. If you don't add them as column properties, you'll be prompted later to add them.

Let's go ahead and run the prediction model script. It runs my model. You can see I have my least squares models for both of my responses, or critical quality attributes, and it's open to the Prediction Profiler. Hopefully, all of you are familiar with the Prediction Profiler. It's such a wonderful tool for exploring the surfaces of your model, looking at the interactions of your factors, and performing optimizations.

Note that I've also set it so that it draws the spec limits on the Profiler. You see the lower spec limit for yield and the upper spec limit for impurity. I'm going to go ahead and lock the vendor at cheap because that's the only one we're going to consider. Now we'll only look at the response surface for the cheap vendor.

Let's go ahead and turn on the Design Space Profiler. To get to that, you go to the Prediction Profiler menu, and a few down, you'll see a Design Space Profiler. When you select that, it runs thousands of random uniform simulations in the background and produces this Design Space Profiler below the Prediction Profiler.

The first thing I want to point out is down here in the bottom right: you'll note that it did indeed bring in the spec limits I had as column properties. It has also added some error that it brought in from my models: the root mean squared error from my least squares models. This is used to add noise to the predictions, which we always recommend since our prediction model is not perfect. You can adjust these errors if you don't think they are what you want to use, but we always recommend that you add some error to your prediction model.
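The idea of adding model error is simple to sketch: draw normal noise with sigma equal to the model RMSE around each prediction. The numbers below (predicted yield 78, RMSE 1.2, 75% lower spec) are hypothetical, chosen only to show why a prediction comfortably above spec can still occasionally miss:

```python
import random

def simulate_response(predicted_mean, rmse, n=1000, seed=3):
    """Add normal noise with sigma equal to the model RMSE to each
    prediction, acknowledging that the fitted model is not perfect."""
    rng = random.Random(seed)
    return [predicted_mean + rng.gauss(0.0, rmse) for _ in range(n)]

# Hypothetical numbers: a predicted yield of 78 with RMSE 1.2 can still
# occasionally fall below a 75% lower spec once model error is added.
draws = simulate_response(78.0, 1.2)
below_spec = sum(d < 75.0 for d in draws) / len(draws)
```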

Looking at the Design Space Profiler, you might notice it looks fairly similar to profilers that you are probably used to seeing, with a few key differences. First, when you look over here at the Y-axis, you'll see that instead of a number, it just says InSpec Portion, and that's because that number is actually over here to the right. This is a really important number, the InSpec Portion. It's giving me the portion of simulation points that are in spec for both of my critical quality attributes, both of my responses, and right now we're at 38.79%. If you want to see the in-spec values for each response individually, those are over here to the right of your response fields.

The other thing that you might notice looks different is in each cell, instead of just one curve, we have two curves, and that's because we're looking for a lower and upper limit for each factor or each parameter. There's a handy legend over here to help us with these curves. The blue curve is the InSpec Portion as the lower limit changes, and the red curve is the InSpec Portion as the upper limit changes.

All we want to do is interact with this profiler to increase our InSpec Portion and figure out what design space is really going to give us a good in-spec rate. We want to adjust the lower and upper limits, moving them in a direction where our InSpec Portion increases. For example, if I move the lower limit of my pH up, that's going to increase my InSpec Portion. I would not want to lower the upper limit, because that's going to decrease my InSpec Portion.

As for the ways you can interact: you can move the markers, you can enter values below the cells, you can enter values in these fields over here, or you can use these two buttons, Move Inward and Move Outward. That's my favorite way to interact with the Design Space Profiler. If you click on the Move Inward button, it finds the move that gives you the biggest increase in InSpec Portion. It's looking for the steepest upward move.
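Conceptually, Move Inward is a greedy search: try tightening each lower and each upper limit by a small step, and keep the single move that raises the InSpec Portion the most. Here is a rough Python sketch of that logic; the in-spec model, step size, and limits are hypothetical, and the real JMP implementation may differ in its details:

```python
import random

# Toy stand-in for "is this point in spec for both CQAs?" -- the real
# check would evaluate the fitted prediction models.
def in_spec(ph, temp):
    return (80 - 2 * (ph - 6.5) ** 2 - 0.05 * (temp - 40) ** 2 > 75
            and 1.0 + 0.3 * (ph - 6.5) ** 2 + 0.002 * (temp - 40) ** 2 < 1.25)

def portion(limits, n=4000, seed=4):
    """InSpec Portion: uniform simulation restricted to the current limits."""
    rng = random.Random(seed)
    hits = sum(in_spec(rng.uniform(*limits["pH"]),
                       rng.uniform(*limits["temp"]))
               for _ in range(n))
    return hits / n

def move_inward(limits, step=0.05):
    """One greedy step: try shrinking each lower and upper limit by
    `step` of its range and keep the single move with the biggest
    increase in InSpec Portion -- the steepest upward move."""
    best_score, best_limits = portion(limits), limits
    for name, (lo, hi) in limits.items():
        d = step * (hi - lo)
        for trial in ({**limits, name: (lo + d, hi)},
                      {**limits, name: (lo, hi - d)}):
            score = portion(trial)
            if score > best_score:
                best_score, best_limits = score, trial
    return best_score, best_limits

start = {"pH": (5.5, 7.5), "temp": (30.0, 50.0)}
score, new_limits = move_inward(start)   # one click of Move Inward
```

Clicking the button repeatedly corresponds to calling `move_inward` in a loop until the InSpec Portion reaches the target or the region gets too tight to be practical.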

If I click on that, notice that it moved my temp, the upper limit of my temp from 45 down to 43.5, and my InSpec Portion increased. You might also notice this Volume Portion that's saying that I started with 100% using all of my simulated data, and now we're down to 95% because I've cut off some of that initial design space.

If I click Move Inward again, it once again moves my temperature upper limit down a little bit. My InSpec goes up. Click it again. My goal is to try to get to 99% InSpec. While I do that, there are a couple of options in the Design Space Profiler menu that I like to use while I'm adjusting my design space.

The first option I like to use is Make and Connect Random Table. What it does is create a new table of random data. Let's go ahead and turn it on. I'm going to stick with the default of 10,000 values, and I want to add random noise to my prediction model. Click OK, and it creates a new data table of random data, where each row is marked red if it's out of spec and green if it's in spec for both of my critical quality attributes, and it's selected if it's still within the design space.

What I like to do is look at the plots that are saved as scripts in this table. If I run the scatter plot matrix Y script, I get a scatter plot of my response space. If I run scatter plot matrix X, I get a scatter plot of my factor space. Note that the vendor axis is just jitter, so I'm going to turn off the jitter. Since vendor is locked at cheap, it's not telling me much; really, we're only interested in this top-left scatter plot of temp versus pH.

I like to set these graphs up and look at them as I am adjusting my design space. If I continue to click Move Inward, notice how the design space, which is the shaded area, shrinks here in the factor space and the number of out-of-spec points decreases. If you want to make it even clearer, you can turn on the Connect Hide Mode feature, which completely hides any points that are outside the current design space.

If I continue to click Move Inward, we'll see if we can get to 99% while still having a design space that I think looks reasonable that I can use. I'm going to keep clicking. Let's see how my red points are disappearing. That's good. Now I've hit 99.59%. I think that that design space looks like something I can probably achieve looking at all my plots.

The next thing I like to do is I like to send the center of this design space back to the Prediction Profiler, and I can do that with this Send Midpoints to Profiler option, so I can get a view of where that center is on my response surface. Sometimes I like to turn on the desirability to see what that looks like when I find that center of my design space.

The other really important thing is, as we mentioned in the introduction, I think my pH and temp parameters actually follow a normal distribution, whereas the Design Space Profiler simulated from a uniform distribution. The nice thing is there's an option under the Design Space Profiler menu called Send Limits to Simulator. If you select it, there are some options for turning on the Simulator in the Profiler and sending these limits back to it.

I'm going to choose Normal with Limits at 3 Sigma. What that's going to do is turn on the Simulator, make my process parameters normal, set the mean at the center of the design space, and set the standard deviation so that the limits sit at 3 Sigma. If I click OK, it turns the Simulator on with pH and temp set to normal. The means are at the center of the design space, and it essentially took the ranges of my design space and divided them by six, so the process runs at 3 Sigma.
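The arithmetic behind Normal with Limits at 3 Sigma is just this (the pH range used in the example call is hypothetical):

```python
def normal_from_limits(lower, upper):
    """Mean at the center of the design-space range; sigma = range / 6,
    so the design-space limits land at +/- 3 sigma."""
    return (lower + upper) / 2, (upper - lower) / 6

# e.g. a hypothetical pH design space of (6.2, 6.8) becomes
# mean 6.5 with sigma of about 0.1
mean, sigma = normal_from_limits(6.2, 6.8)
```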

Now, if I click on Simulate to see what the defect rate looks like, given these are normally distributed within the design space, things look really good. I did want to note that we are still adding random error to our prediction model. That looks really good. I also like to use this Simulate to Table option in the simulator, which is going to create a whole new data table where my process parameters are distributed normally according to these distributions. I'm going to click on that.

There's a script that is automatically created in this table for distribution. If I run it, I can see the distribution of my critical quality attributes, and it gives me the capability report because I have spec limits in the column properties. I can quickly see that if I can get my processes running within the center of my design space with 3 Sigma limits, I'm going to have some very capable processes, with very good capability and low nonconformance for both of my critical quality attributes.
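The core numbers in such a capability report can be sketched from the simulated data: an overall capability index (Ppk) and the observed nonconformance rate. The simulated yields below are hypothetical, generated only to illustrate the computation:

```python
import math
import random

def capability(samples, lsl=None, usl=None):
    """Overall capability (Ppk) and observed nonconformance rate for a
    simulated CQA -- a rough mirror of what the capability report shows."""
    n = len(samples)
    mean = sum(samples) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    indices = []
    if lsl is not None:
        indices.append((mean - lsl) / (3 * sd))  # distance to lower spec
    if usl is not None:
        indices.append((usl - mean) / (3 * sd))  # distance to upper spec
    out = sum((lsl is not None and x < lsl) or
              (usl is not None and x > usl)
              for x in samples)
    return min(indices), out / n

# Hypothetical simulated yields: a centered process at mean 80 with
# sigma 1 against a 75% lower spec is highly capable.
rng = random.Random(5)
yields = [rng.gauss(80.0, 1.0) for _ in range(5000)]
ppk, nonconforming = capability(yields, lsl=75.0)
```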

One last thing I wanted to point out about the Design Space Profiler: once you find a design space that you're happy with, there is an option here, Save X Spec Limits, that saves the current design space back to the data table as column properties for your critical process parameters. If I click on that and go back to the original design table, you can see that for pH and temp, the design space has been saved as spec limits. I can just save the data table and then remember what my new design space ranges are.

Let's go back to the PowerPoint. That was a fairly simple and straightforward example, but I want to show you how we can use JMP for more complex ones. Before we go to our next example, I want to talk about another area of pharmaceutical development where QbD principles have been applied more recently, and that is analytical procedure development.

Analytical procedure development refers to the science and risk-based approach to developing and maintaining the suitability of analytical test methods over their life cycles. The ICH Q14 document applies the QBD framework to analytical procedure development, calling it the enhanced approach. Much of what is in that document is beyond the scope of this presentation, but I will show how the Design Space Profiler can be used to find an important concept in that document, the Method Operable Design Region, or MODR.

The MODR is defined as a combination of analytical procedure parameter ranges within which the analytical procedure performance criteria are fulfilled and the quality of the measured result is assured. Also, moving within an approved MODR does not require regulatory notification. We can think of the MODR as the design space or the optimal operating region, and we can use the Design Space Profiler to find it.

We can think of the analytical procedure parameters as critical process parameters and the analytical procedure performance criteria as critical quality attributes, and that's how I'll refer to those mainly in the rest of the talk. Note, once again, these ideas and the terminology come from the pharmaceutical industry, but they really can be applied much more broadly.

Let's look at the second example in the ICH Q14 guidelines to see how JMP and the Design Space Profiler could be used to find the MODR for an analytical procedure. The purpose of the analytical procedure in this example is to measure the relative potency of an anti-TNF-alpha monoclonal antibody in a drug. The potency assay should be able to detect a change and/or a shift in potency upon forced degradation.

Before I move on, I wanted to thank my colleague, Byron Wingerd, who created this example from the guidelines document. This is a reproduction of the Ishikawa diagram in the document. It shows the analytical procedure features that were considered during risk assessment. The parts of the process we will consider in this example are colored in purple. They are the assay execution, and the cell preparation.

In this example, we want to calculate the potency of the preparation by using a four-parameter logistic curve model. On the left, we have the analytical procedure parameters, focusing on the cell prep and assay execution. These are our CPPs, or critical process parameters, and they were known from prior work. On the right, we have the analytical procedure performance criteria, which are requirements for the dose-response curves. These are our critical quality attributes, or CQAs; these are our logistic curve criteria.

We next designed an experiment using JMP's Custom Design platform with the CQAs as responses and the CPPs as factors. The CO₂ concentration and the incubation temperature were hard-to-change factors for our experiment, so we needed to create a split-plot experiment with Custom Design in JMP.

We then ran the split-plot experiment and collected results at eight serial dilution levels, which were needed for fitting the four-parameter logistic dose-response curves. These are intermediate results that we will use to fit the logistic curves.

Next, we used JMP's Fit Curve platform to fit the four-parameter logistic dose-response curves, and then we collected the parameters and fit diagnostics, which were our experimental responses, or critical quality attributes. Once we finished the experiment, we used JMP's Fit Model REML platform to fit mixed models for each of our five critical quality attribute responses.
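For reference, a four-parameter logistic curve and the relative potency derived from it can be written down in a few lines. This is one common parameterization, shown as an illustration; the exact form and parameter names used in the assay and in Fit Curve may differ:

```python
def four_pl(x, lower, upper, ec50, slope):
    """Four-parameter logistic dose-response curve (one common
    parameterization): response moves between the upper and lower
    asymptotes, with the inflection at x = EC50."""
    return lower + (upper - lower) / (1 + (x / ec50) ** slope)

def relative_potency(ec50_reference, ec50_test):
    """Relative potency of the test preparation versus the reference,
    assuming parallel curves (shared asymptotes and slope)."""
    return ec50_reference / ec50_test

# At x = EC50 the response sits exactly halfway between the asymptotes.
midpoint = four_pl(2.0, lower=0.1, upper=1.5, ec50=2.0, slope=1.2)
```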

Now that we have a prediction model, we're going to use the Design Space Profiler to determine the optimal operating region, or the MODR, for this analytical procedure. Let's go to JMP once again.

This is the data table that was created by the Custom Design platform when we created the split-plot experiment. We ran the experiment, as I mentioned, and we populated the results in our critical quality attribute columns. I've already added spec limits as column properties, once again, and I've saved our mixed model as a script. Let's go ahead and run that.

This runs the script. It creates our prediction model, which is a mixed model for each of our critical quality attributes, as you see here, and it opens with the Profiler, so I can nicely see the response surface of my prediction model. Once again, let's go ahead and turn on the Design Space Profiler. For this example, I'm going to cut down the number of simulations by deleting a zero to make this faster, so we're not waiting too long for it to run. Without that, it would take a little over a minute, which isn't bad, but isn't needed for a presentation.

Once again, we get the Design Space Profiler right below the Profiler, and once again, I wanted to note that it brought in our spec limits from the column properties, and it added the root mean squared error from our models, and I'm just going to leave that. I think that seems good.

The other thing I want to note is that given these initial factor ranges, using 100% of our data, we're about 70.61% in spec. I can see that the parameters we're probably going to want to adjust are incubation time and viability dye, at least at first. I'm going to go ahead and start clicking Move Inward to adjust the design space. Indeed, it's moving incubation time and viability dye. I'm going to keep clicking.

I can also turn on Make and Connect Random Table and look at the scatter plots of my response space. I'm going to make a custom scatter plot for my parameters so I don't look at the whole matrix, which doesn't give me any extra information. There we go. I'll keep clicking Move Inward to see if we can get to 99%, possibly, while still having a reasonable design space region. You can see how the design space is shrinking. Keep clicking Move Inward, and let's see if I can get to 99.

Incubation time is getting tight. Let's see if I can keep going. I just hit 99%. This is what the design space looks like. I can use Connect Hide Mode to see it a little more clearly. This is what the response surface scatter plot looks like. My incubation time does look pretty tight, as you can see, but I think I can probably achieve that. I'm going to send the center of the design space, the midpoint, back to the Prediction Profiler to see what that looks like. That looks pretty reasonable. I'm going to turn on the desirability real quick to see what that looks like.

Then once again, I think that my process parameters are actually distributed normally and not uniformly, so I'm going to send the limits to the Simulator with limits at 3 Sigma. It automatically sets up the Simulator to run within the design space with normal distributions at 3 Sigma. When I click Simulate, I get a defect rate that looks pretty good. I can also use Simulate to Table to take a look at my critical quality attributes' distributions and capability reports if I can run within that design space.

Upper Asymptote looks really good; Lower Asymptote, not as good, but not horrible. Everything else looks decent. The Lower Asymptote might be the one I'm most concerned with for quality. Once again, I could save these limits back to my table if I'm satisfied with this design space.

Very quickly, I'll go back to my PowerPoint slides. I just want to give some concluding thoughts. The Prediction Profiler, with its Design Space Profiler and Simulator, is an indispensable tool for innovating high-quality products faster. Finding design spaces or optimal operating regions is applicable to many more applications than just the pharmaceutical industry. It's much broader than that.

Even within the pharmaceutical industry, finding optimal operating regions can be applied outside the original scope of that design space document. As we saw in that last example, it can be applied to analytical procedures and can be used to find an MODR as well, so it is very widely applicable. These are my references, and I once again want to thank Byron Wingerd for his help with that example. Thank you so much for attending.

Published on 12-15-2024 08:23 AM by Community Manager | Updated on 03-18-2025 01:12 PM

Quality by Design (QbD) is a systematic approach for building quality into a product. The Design Space Profiler in JMP helps solve the fundamental QbD problem of determining an optimal operating region that assures quality as defined by specifications associated with Critical Quality Attributes (CQAs) while still maintaining flexibility in production. 

In this demonstration, learn how to use the Design Space Profiler and the Simulator, tools within the Prediction Profiler, to find the design space and robust areas within the design space suited for normal use. A toxin neutralization bioassay example from the ICH Q14 Analytical Procedure Development guideline is used. The Prediction Profiler in JMP has long been a powerful tool for visualizing and optimizing models. The addition of the Design Space Profiler and the Simulator within the Prediction Profiler makes it an indispensable tool for high-quality product and process innovation.

 

 

Hello, my name is Laura Lancaster, and I am here today to talk about finding optimal operating regions with the JMP Design Space Profiler and Simulator. I feel confident that everyone attending JMP Discovery Summit has a strong desire to excel at innovation. Many of you work very hard to innovate new products and processes, and we, the JMP developers, work very hard to create software that accelerates and improves innovation.

Because of the pressures to innovate quickly, the manufacturing processes for these new products often need to be designed quickly. We need to use software that helps us design, model, and analyze our data so that we can reach our goals for producing high-quality products as quickly as possible. When we produce a product that is well-established and has manufacturing that is stable and capable, we might expect to see quality results like the Process Capability Report on this slide.

This is data from a drug manufacturing process whose batches are meeting specification 99% of the time. However, when we're starting out with a new product, we usually can't expect to get such great results at first, especially not just by luck. Instead, we're more likely to see results like this slide when we start out.

We may not know how to produce a high-quality product or how to do it consistently. This is especially true if we don't design, analyze, model, and predict well. We definitely want to improve from being out of spec 33% of the time, like in this report.

Thankfully, JMP has added a feature to the Prediction Profiler called the Design Space Profiler. It can help us quickly get from a messy and incapable initial process, like on the left here, to a stable and capable process, like on the right, in a structured and repeatable way. Let's walk through how this process works.

Once we have designed a new product, we begin by defining the quality characteristics that need to be achieved to ensure the desired quality. Then we identify which of those quality characteristics need to be within appropriate limits or ranges to ensure the product quality. In the pharmaceutical industry, these are often called critical quality attributes, or CQAs for short.

In our example, we have two critical quality attributes, yield, which needs to be greater than 75%, and drug impurity, which needs to be less than 1.25%. We can see in this example, starting out, that we're pretty incapable. We have 33% out of spec for yield and 16.5% out of spec for impurity. We're not doing very well at first.

To ensure that we're making a high-quality product, we need to think about which parts of our production process affect the outcomes of our product. We need to do experimental studies to determine which process parameters have a high impact on the critical quality attributes. These important process parameters, whose variability impacts our critical quality attributes, are called critical process parameters or CPPs for short.

If we're smart about how we go about these studies, the data that identified the critical process parameters will also give us a statistical prediction model that shows the mathematical relationship between the critical process parameters and the critical quality attributes.

Our next step is to design and run an experimental study to identify our critical process parameters. Once we finish the experiment, we build a model that identifies pH, temperature, and vendor as having a big impact on the amount of yield and impurity in our product. In this case, pH, temp, and vendor are our critical process parameters.

Now, since we were smart about how we designed our experiment for identifying the critical process parameters, we then use those results to find a statistical prediction model of our outputs, the critical quality attributes in terms of our critical process parameters.

The prediction model that we build is a least squares response surface model. This slide shows the Prediction Profiler for our prediction model, which we can use to visualize the mathematical relationship between the critical process parameters and the CQAs.

Now, once we have the prediction model, we could, in principle, use it like a crystal ball to explore the outcomes at many different process parameter settings. We could then find the combinations of pH, temperature, and vendor that result in a product that has a high probability of being in spec.

The graph in this slide shows a scatter plot of the process parameter settings that have been randomly and uniformly varied across some initial reasonable ranges. We can use these simulated settings as input to our prediction model and then determine if the resulting product would be in-spec for both critical quality attributes or not. If we color the points according to whether the result is in-spec as green or out-of-spec as red, we get a graph like this one.
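
The simulation idea can be sketched in a few lines of Python. Everything here is a stand-in: the quadratic prediction functions, the parameter ranges, and the simulation size are hypothetical, chosen only to illustrate the uniform sampling and the in-spec coloring.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Uniformly vary the process parameters across initial reasonable ranges
ph = rng.uniform(6.0, 8.0, n)       # hypothetical pH range
temp = rng.uniform(30.0, 45.0, n)   # hypothetical temperature range

# Stand-in quadratic prediction models (the real ones come from the DOE fit)
def predict_yield(ph, temp):
    return 90 - 8 * (ph - 7.2) ** 2 - 0.12 * (temp - 36) ** 2

def predict_impurity(ph, temp):
    return 0.5 + 0.4 * (ph - 7.0) ** 2 + 0.004 * (temp - 34) ** 2

# A point is "green" only if every CQA meets its spec
in_spec = (predict_yield(ph, temp) > 75) & (predict_impurity(ph, temp) < 1.25)
print(f"InSpec portion: {in_spec.mean():.1%}")
```

Coloring the points by `in_spec` then gives exactly the green/red scatter plot described above.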

This graph nicely shows the relationship between the process parameter settings and getting quality product. Now, after looking at this data, the company decides, for business reasons, to go with the cheap vendor, since it seems reasonable and will also save them money.

The graph on the left shows the scatter plot when we only consider the cheap vendor. We can clearly see the region where the parameter settings result in quality product, shown in green. In the graph on the right, it is apparent that if we keep pH and temperature within the blue box, we will produce product that meets our quality requirements most of the time, so the blue box is our optimal operating region.

Now, the simulated points we used to find the optimal operating region were varied randomly and uniformly. But we know that our pH and temperature processes actually follow a normal distribution. In fact, our pH and temperature processes were initially run with distributions like the histograms on the left, resulting in product that was out of spec 36.4% of the time, as you see in the scatter plot on the right.

But if we can change the target and the variation of our pH and temp processes to run within the center of our optimal operating region, the blue box that we found earlier, we can get quality product 99.9% of the time, as we see in this scatter plot. Note that I placed spec limits on the histograms on the left to show the ranges of our optimal operating region; if we center our processes and keep their variation within the design space, this would be the result.

It turns out that this idea for finding an optimal operating region has been around for a long time and was first published by Joseph Juran way back in 1992 in his book, Juran on Quality by Design. He wrote about a systematic approach for incorporating quality into the entire life cycle of a product, beginning at the design phase.

Historically, in many countries where the pharmaceutical industry must comply with regulatory agencies such as the FDA or the EMA, approval was required to be able to change any production process settings. This was costly and time-consuming for both the manufacturers and the regulators. It was recognized that Juran's ideas could be used to determine safe operating regions within which pharmaceutical manufacturing processes could vary without requiring a recertification.

A few years later, the industry adopted the Quality by Design principles, which I'll refer to as QbD for short. In 2005, the International Conference on Harmonisation, or ICH, began publishing guidelines for these ideas. The ICH Q8(R2) document is the main document that discusses design space. It formally defines the design space as a multidimensional combination and interaction of material attributes and process parameters that have been demonstrated to provide assurance of quality.

The approach is based on Juran's QbD ideas for how to apply design of experiments to this problem. JMP is the world's leading tool for the design and analysis of experiments, and JMP's best tool for interactive model visualization, exploration, and optimization is the Profiler. The Profiler now has a built-in capability called the Design Space Profiler, which helps us find and optimize design spaces for our processes.

I will be using some terminology and examples from the pharmaceutical industry in this talk, but the same problem is present in every innovative industry. The solution is the same regardless of the types of processes and products.

Now I would like to show how we can use the Design Space Profiler in JMP to find the optimal operating region, or the design space, for the example we have already been looking at in the introduction. Just to recap, for our drug manufacturing example, we had two critical quality attributes, yield and impurity. We had three critical process parameters, pH, temp, and vendor, and we had decided to only look at the cheap vendor. Our goal is to produce quality product 99% of the time.

Now let's go to JMP to see how to use the Design Space Profiler to find our design space, or optimal operating region, for this product. Now we're in JMP, and this is the data table that was created by JMP's Custom Design platform when we designed our experiment. We ran the experiment, we populated our critical quality attribute columns with the experimental results, we found the prediction model, and I've saved a script of that prediction model to the data table.

I also wanted to point out that I've already saved the spec limits as column properties to the critical quality attribute columns. You can see right here. That is important because the Design Space Profiler has to know the spec limits of the critical quality attributes to run. If you don't add them as column properties, you'll be prompted later to add them.

Let's go ahead and run the prediction model script. It runs my model. You can see I have my least squares models for both of my responses, or critical quality attributes, and it's open to the Prediction Profiler. Hopefully, all of you are familiar with the Prediction Profiler. It's such a wonderful tool for exploring the surfaces of your model, for looking at the interactions of your factors, and for optimization.

Note that I've also set an option so that it draws the spec limits on the Profiler. You see the lower spec limit for yield and the upper spec limit for impurity. I'm going to go ahead and lock the vendor at cheap because that's the only one we're going to consider. Now we'll only look at the response surface for the cheap vendor.

Let's go ahead and turn on the Design Space Profiler. To get to that, you go to the Prediction Profiler menu, and a few down, you'll see a Design Space Profiler. When you select that, it runs thousands of random uniform simulations in the background and produces this Design Space Profiler below the Prediction Profiler.

The first thing I want to point out is down here in the bottom right, you'll note that it did indeed bring in my spec limits I had as column properties. It also has added some error that it brought in from my models. It's the root mean squared error from my least squares models, and this is going to be used for my prediction models to add in some error, which we always recommend since our prediction model is not perfect. You can adjust these errors if you don't think these are what you want to use. But we always recommend that you do add some error to your prediction model.

Looking at the Design Space Profiler, you might notice it looks fairly similar to profilers that you are probably used to seeing, with a few key differences. First, when you look over here at the Y-axis, you'll see that instead of a number, it just says InSpec Portion, and that's because that number is actually over here to the right. This is a really important number, this InSpec Portion. It's giving me the portion of simulation points that are in spec for both of my critical quality attributes, both of my responses, and right now, we're at 38.79%. If you want to see the in-spec values for each response individually, that's over here to the right of your response fields.

The other thing that you might notice looks different is in each cell, instead of just one curve, we have two curves, and that's because we're looking for a lower and upper limit for each factor or each parameter. There's a handy legend over here to help us with these curves. The blue curve is the InSpec Portion as the lower limit changes, and the red curve is the InSpec Portion as the upper limit changes.

All we want to do is interact with this profiler to increase our InSpec Portion and figure out what design space is really going to give us a good in-spec rate. We want to adjust the lower and upper limits, moving them in a direction where our InSpec Portion is going to increase. For example, if I raise the lower limit of my pH, that's going to increase my InSpec Portion. I would not want to lower the upper limit because that's going to decrease my InSpec Portion.

There are several ways you can interact: you can move the markers, you can enter values below the cells, you can enter values in these fields over here, or you can use these two buttons, Move Inward and Move Outward. That's my favorite way to interact with the Design Space Profiler. If you click on the Move Inward button, it's going to find the move that gives you the biggest increase in InSpec Portion. It's looking for the steepest upward move.
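
Conceptually, Move Inward is a greedy coordinate search: try nudging each lower limit up and each upper limit down, and keep the single nudge that raises the in-spec portion of the points still inside the limits the most. Here is a minimal sketch, assuming we already have the simulated points and their in-spec flags; the step size and data layout are my own assumptions, not JMP's actual implementation.

```python
import numpy as np

def move_inward(points, in_spec, limits, step=0.05):
    """One greedy 'Move Inward' step: try shrinking each lower/upper
    limit by a fraction of its range and keep the single move that
    raises the in-spec portion of the remaining points the most.
    `points` is (n, k); `limits` is a list of [lo, hi] per factor."""
    def portion(lims):
        mask = np.ones(len(points), dtype=bool)
        for j, (lo, hi) in enumerate(lims):
            mask &= (points[:, j] >= lo) & (points[:, j] <= hi)
        return in_spec[mask].mean() if mask.any() else 0.0

    best, best_lims = portion(limits), None
    for j, (lo, hi) in enumerate(limits):
        delta = step * (hi - lo)
        # Try raising the lower limit, then lowering the upper limit
        for cand in ([lo + delta, hi], [lo, hi - delta]):
            trial = [list(l) for l in limits]
            trial[j] = cand
            p = portion(trial)
            if p > best:
                best, best_lims = p, trial
    return best, (best_lims or limits)
```

Calling this repeatedly mirrors clicking Move Inward until the in-spec portion reaches the goal.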

If I click on that, notice that it moved my temp, the upper limit of my temp from 45 down to 43.5, and my InSpec Portion increased. You might also notice this Volume Portion that's saying that I started with 100% using all of my simulated data, and now we're down to 95% because I've cut off some of that initial design space.

If I click Move Inward again, it once again moves my temperature upper limit down a little bit. My InSpec goes up. Click it again. My goal is to try to get to 99% InSpec. While I do that, there are a couple of options in the Design Space Profiler menu that I like to use while I'm adjusting my design space.

The first option I like to use is this Make and Connect Random Table. What it does is create a new table of random data. Let's go ahead and turn it on. I'm going to stick with the 10,000 default values, and I want to add random noise to my prediction model. Click OK, and it creates a new data table of random data where each point is marked as red if it's out of spec and green if it's in spec for both of my critical quality attributes, and it's selected if it's still within the design space.

What I like to do is look at these plots that are saved as scripts in this table. If I run Scatterplot Matrix Y, I get a scatter plot of my response space. If I run Scatterplot Matrix X, I get a scatter plot of my factor space. Now, note that this vendor axis is just jitter, so I'm going to turn off the jitter. Since vendor is locked at cheap, it's not really telling me a whole lot; really, we're only interested in this top-left scatter plot of temp versus pH.

I like to set these graphs up and watch them as I am adjusting my design space. If I continue to click Move Inward, notice how the design space, which is the shaded area, is shrinking here in the factor space, and the number of out-of-spec points is decreasing. If you want to make it even clearer, you can turn on the Connect Hide Mode feature, which completely hides any points that are outside of the current design space.

If I continue to click Move Inward, we'll see if we can get to 99% while still having a design space that I think looks reasonable that I can use. I'm going to keep clicking. Let's see how my red points are disappearing. That's good. Now I've hit 99.59%. I think that that design space looks like something I can probably achieve looking at all my plots.

The next thing I like to do is I like to send the center of this design space back to the Prediction Profiler, and I can do that with this Send Midpoints to Profiler option, so I can get a view of where that center is on my response surface. Sometimes I like to turn on the desirability to see what that looks like when I find that center of my design space.

The other thing, which is really important, is that, as we mentioned in the introduction, I actually think that my pH and temp parameters follow a normal distribution, whereas the Design Space Profiler simulated from a uniform distribution. The nice thing is there's an option under the Design Space Profiler menu that says Send Limits to Simulator. If you select that, there are some options for how to turn on the Simulator in the Profiler and send these limits back to it.

I'm going to choose Normal with Limits at 3 Sigma. What that's going to do is turn on the Simulator, make my process parameters normal, set the mean at the center of the design space, and set the standard deviation so that the limits are at 3 Sigma. If I click OK, it turns the Simulator on with pH and temp set to normal. The means are at the center of the design space, and what it essentially did was take the range of my design space and divide it by six, so that the limits sit at 3 Sigma.
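
The arithmetic behind this option is simple: the mean goes at the midpoint of each design-space range, and the standard deviation is chosen so the limits land at plus or minus 3 sigma, i.e., the range divided by six. A small sketch with hypothetical design-space ranges:

```python
def normal_from_limits(lo, hi, n_sigma=3):
    """Mean at the center of the design-space range; sigma chosen so the
    limits sit at +/- n_sigma (range / 6 for 3 sigma)."""
    mean = (lo + hi) / 2
    sigma = (hi - lo) / (2 * n_sigma)
    return mean, sigma

# Hypothetical design-space ranges found with Move Inward
mean_ph, sd_ph = normal_from_limits(6.9, 7.5)    # mean 7.2, sigma 0.1
mean_temp, sd_temp = normal_from_limits(33.0, 39.0)  # mean 36, sigma 1
```

The simulator then draws each process parameter from its normal distribution with these means and sigmas.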

Now, if I click on Simulate to see what the defect rate looks like, given these are normally distributed within the design space, things look really good. I did want to note that we are still adding random error to our prediction model. That looks really good. I also like to use this Simulate to Table option in the simulator, which is going to create a whole new data table where my process parameters are distributed normally according to these distributions. I'm going to click on that.

There's a script that is automatically created in this table for the distribution. If I run that, it's going to allow me to see the distributions of my critical quality attributes, and it gives me the capability report because I have spec limits in the column properties. I can quickly see that if I can get my processes running within the center of my design space with 3 Sigma limits, I'm going to have some very capable processes, with very good capability and low nonconformance for both of my critical quality attributes.
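
The nonconformance rate and an overall capability index can be approximated directly from the simulated table. Here is a hedged sketch, with made-up simulated yield values standing in for the real simulation output, using a one-sided (lower-spec) Ppk-style index:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical simulated yield values: normal inputs pushed through the
# prediction model plus RMSE-sized noise produce something like this
yield_sim = rng.normal(loc=85.0, scale=2.0, size=10_000)

lsl = 75.0  # lower spec limit for yield
nonconforming = (yield_sim < lsl).mean()
# One-sided capability index: distance from mean to LSL in 3-sigma units
ppk = (yield_sim.mean() - lsl) / (3 * yield_sim.std(ddof=1))

print(f"nonconformance: {nonconforming:.2%}, Ppk: {ppk:.2f}")
```

With the made-up numbers above, the mean sits about five standard deviations above the lower spec, so the estimated nonconformance is essentially zero and the index is well above the common 1.33 benchmark.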

One last thing I wanted to point out about the Design Space Profiler: once you find the design space that you're happy with, there is an option here, Save X Spec Limits, where you can save the current design space back to the data table as column properties for your critical process parameters. If I click on that and go back to the original design table, you can see that for pH and temp, the design space has been saved as spec limits. I can just save the data table and then remember what my new design space ranges are.

Let's go back to the PowerPoint. That was a fairly simple and straightforward example, but I want to show you how we can use JMP for more complex examples. But before we go to our next example, I want to talk about another area of pharmaceutical development where QbD principles have been applied more recently, and that is analytical procedure development.

Analytical procedure development refers to the science- and risk-based approach to developing and maintaining the suitability of analytical test methods over their life cycles. The ICH Q14 document applies the QbD framework to analytical procedure development, calling it the enhanced approach. Much of what is in that document is beyond the scope of this presentation, but I will show how the Design Space Profiler can be used to find an important concept in that document, the Method Operable Design Region, or MODR.

The MODR is defined as a combination of analytical procedure parameter ranges within which the analytical procedure performance criteria are fulfilled and the quality of the measured result is assured. Also, moving within an approved MODR does not require regulatory notification. We can think of the MODR as the design space or the optimal operating region, and we can use the Design Space Profiler to find it.

We can think of the analytical procedure parameters as critical process parameters and the analytical procedure performance criteria as critical quality attributes, and that's how I'll refer to those mainly in the rest of the talk. Note, once again, these ideas and the terminology come from the pharmaceutical industry, but they really can be applied much more broadly.

Let's look at the second example in the ICH Q14 guidelines to see how JMP and the Design Space Profiler could be used to find the MODR for an analytical procedure. The purpose of the analytical procedure in this example is to measure the relative potency of an anti-TNF-alpha monoclonal antibody in a drug. The potency assay should be able to detect a change and/or a shift in potency upon forced degradation.

Before I move on, I wanted to thank my colleague, Byron Wingerd, who created this example from the guidelines document. This is a reproduction of the Ishikawa diagram in the document. It shows the analytical procedure features that were considered during risk assessment. The parts of the process we will consider in this example are colored in purple. They are the assay execution, and the cell preparation.

In this example, we want to calculate the potency of the preparation by using a four-parameter logistic curve model. On the left, we have the analytical procedure parameters, focusing on the cell prep and assay execution. These are our CPPs, or critical process parameters, which were known from prior work. On the right, we have the analytical procedure performance criteria that are requirements for the dose-response curves. These are our critical quality attributes, or CQAs; these are our logistic curve criteria.

We next designed an experiment using JMP's Custom Design platform with the CQAs as responses and the CPPs as factors. The CO₂ concentration and the incubation temperature were hard-to-change factors for our experiment, so we needed to create a split-plot experiment with Custom Design in JMP.

We then ran the split plot experiment and collected results at eight serial dilution levels, which were needed for fitting the four parameter logistic dose response curves, so these are intermediary results that we are going to use to fit the logistic curves.

Next, we use JMP's Fit Curve platform to fit the four parameter logistic dose response curves, and then we collected the parameters and fit diagnostics, which were our experimental responses or our critical quality attributes. Once we finished the experiment, we then use JMP's Fit Model REML platform to fit mixed models for each of our five critical quality attribute responses.

Now that we have a prediction model, we're going to use the Design Space Profiler to determine the optimal operating region, or the MODR, for this analytical procedure. Let's go to JMP once again.

This is the data table that was created by the Custom Design platform when we created the split-plot experiment. We ran the experiment, as I mentioned, and we populated our critical quality attribute columns with the results. I've already added spec limits as column properties once again, and I've saved our mixed model as a script. Let's go ahead and run that.

This runs the script and creates our prediction model, which is a mixed model for each of our critical quality attributes, as you see here, and it opens with the Profiler, so I can nicely see the response surface of my prediction model. Once again, let's go ahead and turn on the Design Space Profiler. Now, for this example, I'm going to cut down the number of simulations by deleting a zero to make this faster, so we're not waiting too long for this to run. If I hadn't cut down the zero, it would take a little over a minute, which isn't that bad, but not needed for a presentation.

Once again, we get the Design Space Profiler right below the Profiler, and once again, I wanted to note that it brought in our spec limits from the column properties, and it added the root mean squared error from our models, and I'm just going to leave that. I think that seems good.

The other thing I want to note is that given these initial factor ranges, initially using 100% of our data, we're about 70.61% in spec. I can see that the parameter settings that we're probably going to want to adjust are incubation time and viability dye, at least at first. I'm going to go ahead and start clicking on Move Inward to adjust the design space. Indeed, it's moving incubation time and viability dye. I'm going to keep clicking.

I can also turn on Make and Connect Random Table and look at the scatter plots of my response surface. I'm going to make a custom scatter plot for my parameters, so I don't look at the whole plot, which doesn't give me any extra information. There we go. I'll keep clicking Move Inward to see if we can get to 99%, possibly, while still having a reasonable design space region. You can see how the design space is shrinking.

Incubation time is getting tight. Let's see if I can keep going. I just hit 99%. This is what the design space looks like. I can use Connect Hide Mode to see it a little more clearly. This is what the response surface scatter plot looks like. My incubation time does look pretty tight, as you can see, but I think that I can probably achieve that. I'm going to send the center of the design space, the midpoint, back to the Prediction Profiler to see what that looks like. That looks pretty reasonable. I'm going to turn on the desirability real quick to see what that looks like.

Then once again, I think that my process parameters are actually distributed normally and not uniformly, so I'm going to, once again, send the limits to the Simulator with limits at 3 Sigma. It automatically sets up the Simulator to run within the design space with normal distributions at 3 Sigma. When I click Simulate, I get a defect rate that looks pretty good. I can also simulate to a table to take a look at my critical quality attributes' distributions and capability reports if I can run within that design space.

Upper Asymptote looks really good, Lower Asymptote, not as good, but not horrible. Everything else looks decent. The Lower Asymptote might be the one I'm the most concerned with for quality. Once again, I could save these back to my table to save them if I'm satisfied with this design space.

Very quickly, I'll go back to my PowerPoint slides. I just want to give some concluding thoughts. The Prediction Profiler, with its Design Space Profiler and Simulator, is an indispensable tool for innovating high-quality products faster. Finding design spaces, or optimal operating regions, is applicable to many more applications than just the pharmaceutical industry. It's much broader than that.

Even within the pharmaceutical industry, finding optimal operating regions can be applied outside of the original scope of that design space document. As we saw in that last example, it can be applied to analytical procedures and can be used to find an MODR as well, so it is very widely applicable. These are my references, and I once again want to thank Byron Wingerd for his help with that example. Thank you so much for attending.



Start:
Wed, Mar 12, 2025 06:45 AM EDT
End:
Wed, Mar 12, 2025 07:30 AM EDT
Salon 5-London