The demand for robust and scalable lentiviral vector manufacturing processes for cell and gene therapy has driven the adoption of advanced methodologies. This presentation delves into an innovative approach to process development, focusing on a Quality by Design (QbD) strategy, with a particular emphasis on the powerful capabilities of JMP software.

In adherence to stringent regulatory requirements for Process Performance Qualification, our methodology seamlessly integrates traditional and modern principles while leveraging JMP as a critical tool during the Stage 1 Process Design phase of the Process Validation life cycle. Utilizing JMP for design of experiments (DOE) facilitates comprehensive characterization of the lentiviral vector manufacturing process, enabling precise identification of Critical Process Parameters (CPPs) and the establishment of Proven Acceptable Ranges (PARs).

By harnessing the statistical analysis and visualization features of the software, we ensure a data-driven approach to decision making, enhancing process understanding and control. This talk emphasizes the key role JMP plays in advancing the application of QbD principles to meet the evolving demands of bioprocessing.

 

 

Welcome to my talk. My name is Johann-Christoph Dettmann, and I would like to present to you today our approach to using design of experiments to accelerate process development for lentiviral vector manufacturing. This methodology is currently being applied successfully in customer projects at our process development department in Gaithersburg, and was developed in cooperation with Thomas Little Consulting.

In my presentation today, I will also complement the established data analysis strategy by introducing some additional insights, which are, from my perspective, quite helpful. The presentation will be split into two parts. In the first part, I will give a general introduction to the topic, and afterwards I will give a short demo of how we use JMP to evaluate DOEs.

But before I start, I want to give a brief overview of the core competencies we offer. Miltenyi Bioindustry operates as a CDMO for cell and gene therapy and is part of the Miltenyi Biotec Group. As we also apply our expertise to our own CAR T-cell products, our knowledge is founded on very deep experience in GMP-compliant production of lentiviral vectors and cell products.

We offer a broad manufacturing platform for lentiviral vectors. Beyond this, our PD team supports further optimization in process development as well as specific characterization studies, which can really help on the way towards commercialization.

We also offer standard analytical platforms that can be further developed to establish additional assays. We can support technology and assay transfer to our clients' internal cell manufacturing facilities, and our regulatory experts can provide guidance as well.

Our headquarters is located in Bergisch Gladbach, with facilities for lentiviral vector process development and for cell processing. We also have cell factories in Gaithersburg, San Jose, and Shanghai. Our main facility for GMP-compliant vector production in Germany is located in Teterow. We also have large-scale facilities for vector production in Gaithersburg, which is also the headquarters for lentiviral vector process development.

For vector production, we offer a third-generation-based system, meaning we follow the highest safety standards. Our process is serum-free, runs in suspension, can be scaled, and can be run either in the US or in Teterow. As a global manufacturer of GMP-compliant vectors, we draw on extensive expertise from successfully manufacturing hundreds of GMP batches with our platform process.

This slide shows a rough overview of the product life cycle you should follow if you want to bring your product, for example your cell and gene therapy product, towards commercialization. Based on this graph, I would like to show you which phase of this product life cycle we want to look into in a little more depth.

You would initially start by collecting knowledge about your product and the process to be developed; the aim here is to compile a list of quality attributes that adequately reflect the quality of your product. Afterwards, the respective phases for process development and process characterization take place, which then lead to the generation of the control strategy and the design space.

Then the process performance qualification takes place, which is also known as process validation. However, this step is not to be seen as a final qualification step; it is rather the starting point of an ongoing verification of the qualification status, also called continued process verification.

Of course, we will not be able to talk about everything in this flow chart, but today I would like to focus on the generation of the design space and the methods we use to generate it. We will also talk about how to define critical process parameters for your process, meaning those parameters whose variability has an impact on critical quality attributes and which therefore should be monitored and controlled. We will also talk about how to define proven acceptable ranges for your process parameters, which are defined as a characterized range of a process parameter for which operation within this range, while keeping other parameters constant, will result in producing a material meeting relevant quality criteria.

But before going into this, I would like to add a few words about the design space. A schematic overview is given here on the left, with the white area representing our in-spec region and the shaded area our out-of-spec region. As far as I know, the design space is not a mandatory requirement for process validation; however, authorities highly recommend having a design space in place.

It is a concept outlined in ICH Q8 as a multidimensional combination and interaction of input variables that provides assurance of quality. It is established through a systematic approach involving the optimization of critical material attributes and process parameters, using tools like design of experiments. The design space furthermore ensures that the process operates within defined parameters to consistently meet the quality standards and product specifications.

It also allows for flexibility in process control while maintaining product quality, as it enables the identification of potential failure modes and operational limits. That means that from the design space we are finally able to define our specific parameter ranges, our proven acceptable ranges.

But how do we actually generate our design space? We initially start with a low-level risk assessment. Of course, there are several steps before this risk assessment concerning the scoping of product and process; however, these will not be part of today's talk.

The low-level risk assessment involves a risk-based estimation of potentially influential process parameters. We start by listing the relevant process parameters and correlating them to the respective responses. Afterwards, the assessment considers the extent to which the variation of a process parameter might have an impact on our responses.

For example, if we assume a very high impact of a process parameter, or if we do not have any prior knowledge about a potential influence, we accordingly assign a high risk score. However, when we already know that this process parameter is not influential, or if there is high assurance about this, we can assign a low risk score. This methodology really helps to isolate potentially non-influential process parameters, which can then be excluded from the DOE study.

The same exercise can also be done for higher-order terms, meaning two-factor interactions or quadratics. Here again, scientific rationales and prior knowledge can really help to identify which of the higher-order terms should be included in the study and which should not.

Both assessments, the one for main effects and the one for interactions, ultimately enable a very clear definition of the DOE study design, because, simply speaking, for every model term you want to investigate, an additional experimental run is required. Through this approach, it becomes possible to create a specific design that is really tailored to your requirements.
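As a rough, hypothetical sketch of this run-count logic (the factor names and the terms kept after the risk assessment are made up for illustration and are not our actual study design), counting the model parameters gives the minimum number of runs:

```python
# Rough sketch: one experimental run is needed per model parameter to be estimated.
# Factor names and retained terms are hypothetical; a real custom design would
# typically add extra runs (e.g., replicates) on top of this minimum.

main_effects = ["A", "B", "C", "D", "E", "F"]          # all main effects are kept
interactions = [("A", "B"), ("C", "D"), ("A", "E")]    # 2FIs kept after risk scoring
quadratics   = ["A", "C"]                              # quadratic terms kept

n_parameters = 1 + len(main_effects) + len(interactions) + len(quadratics)  # +1 for intercept
print(f"Model parameters: {n_parameters} -> at least {n_parameters} runs required")
```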

To generate our DOE study, we use the Custom Design platform of JMP. After we have designed our DOE study, people can go into the lab, perform all the experiments, collect all the data, and use this to define our statistical model, which is the representation describing the relationship between our input variables, the factors, and our responses. This can then be used to define our critical process parameters, because we believe that risk assessment alone should not be used to detect and control CPPs; this can only be based on measurements of how the factors directly influence the response. DOE is clearly the method of choice here, as it helps to isolate the influence of every factor and interaction on the critical response.

To define our CPPs, we need, on the one hand, our scaled estimates to get the effect size of our factors, and, on the other hand, the tolerance range, which is the range between the upper specification limit and the lower specification limit. We then determine those factors that shift the mean of the response by more than 20% of tolerance in our DOE study, and these we consider critical process parameters. I will say it again: factors that shift the mean of the response by more than 20% of tolerance in our DOE study are considered critical process parameters.

After defining the CPPs for our unit operations, the design space is then used to define the specific factor ranges for our process. As the design space is very dynamic and depends on the settings of the respective factors, we use Monte Carlo simulations to simulate batch-to-batch variation and to explore the dynamic nature of the design space.

The simulation includes three key sources of variation: the variation coming from the model of the characterized response or product, the variation of each factor we have investigated, and the residual variation not accounted for by the model, also known as the root-mean-square error. This could be, for example, variation coming from the analytical method. With this information, JMP is able to simulate thousands of experiments, which really help to estimate the capability of your process.

For example, in this case we see a defect rate, or out-of-specification rate, above 33%, which is much too high, so we clearly need some adaptation of the process limits or the parameter limits.
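To make the structure of such a simulation concrete, here is a minimal, hypothetical sketch: a made-up two-factor model with assumed coefficients, factor spreads, spec limits, and RMSE. It illustrates the three sources of variation described above, not the actual project model or JMP's simulator itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sim = 10_000

# Hypothetical fitted model for one response: y = b0 + b1*A + b2*B + b12*A*B
b0, b1, b2, b12 = 250.0, 35.0, -20.0, 10.0
lsl, usl = 150.0, 350.0              # specification limits (tolerance = 200)
rmse = 30.0                          # residual variation not explained by the model

# Sources 1 and 2: batch-to-batch variation of the factors around their set points
A = rng.normal(loc=0.0, scale=0.5, size=n_sim)   # coded units, assumed spread
B = rng.normal(loc=0.0, scale=0.5, size=n_sim)

# Source 3: residual (e.g., analytical) variation, represented by the RMSE
noise = rng.normal(loc=0.0, scale=rmse, size=n_sim)

y = b0 + b1 * A + b2 * B + b12 * A * B + noise
oos_rate = np.mean((y < lsl) | (y > usl))
print(f"Simulated out-of-specification rate: {oos_rate:.1%}")
```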

To define our factor ranges, we use a script from our consultant which is quite similar to the Design Space Profiler. This script helps to visualize the simulated data and the failure rates that will occur in the process. The red dots represent simulated experimental runs that are out of specification, and the green dots represent our in-spec runs.

I really like this graph because it allows evaluating to what extent the factors influence our response. For example, for factor D in relation to response A, we see that at the higher end of the factor range there is a higher portion of in-spec runs. By manually adapting these limits, we are able to define process parameter ranges that are more meaningful, more sensible, and more capable, allowing a more robust production. Before, we had around 33% out-of-specification points; now we are below 1%, which really meets our requirements. The limits we manually defined here are then used as our proven acceptable ranges for the process parameters.

We then also define our normal operating ranges, which are based on historical data, equipment, and personnel capability, also under consideration of the established proven acceptable ranges, meaning that these NOR limits should normally be tighter than the PARs.

This was my presentation part. I will now switch over and continue in JMP. I want to show you an example that was also part of one of our characterization studies. In this DOE, we had five different responses and investigated six different factors. From our low-level risk assessment, we concluded to investigate, of course, all main effects, but we also decided to investigate some of the two-factor interactions and some of the quadratic terms, leading finally to a DOE study of about 16 runs.

To evaluate our DOE study, we normally use the Fit Model platform of JMP, and before we go into specific model evaluation, we always start with a data quality check. That means we look at the residuals of our model to see if there are specific patterns in the residuals, which would indicate the presence of an unknown active effect. Ideally, the residuals should be randomly scattered around the zero line. The blue line here indicates a normal distribution of the residuals, which is the case in our situation.

To screen for outliers, we use the externally studentized residuals. Here it is recommended to use the Bonferroni limits, as they provide a more conservative approach, reducing the risk of falsely flagging an out-of-trend point, an outlier.
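For readers who want to reproduce this screening outside of JMP, here is a minimal sketch with hypothetical data: externally studentized residuals from an ordinary least squares fit, compared against a Bonferroni-corrected t cutoff. The data, factor count, and alpha level are assumptions for illustration only.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import OLSInfluence
from scipy import stats

# Hypothetical DOE data: 16 runs, two coded factors, one response.
rng = np.random.default_rng(7)
X = rng.choice([-1.0, 0.0, 1.0], size=(16, 2))
y = 250 + 35 * X[:, 0] - 20 * X[:, 1] + rng.normal(0, 15, size=16)

model = sm.OLS(y, sm.add_constant(X)).fit()
t_ext = OLSInfluence(model).resid_studentized_external   # externally studentized residuals

# Bonferroni-corrected cutoff: the alpha risk is split across all n residuals screened.
n = X.shape[0]
p = int(model.df_model) + 1                              # number of estimated parameters
alpha = 0.05
cutoff = stats.t.ppf(1 - alpha / (2 * n), df=n - p - 1)

flagged = np.where(np.abs(t_ext) > cutoff)[0]
print(f"Bonferroni cutoff: +/-{cutoff:.2f}, flagged runs: {flagged}")
```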

Here are some additional notes about the handling of out-of-trend points, or outliers, especially when you are working within the regulatory environment and reporting, for example, to the FDA. Even when you detect an outlier, meaning that a data point lies outside the Bonferroni limits, this doesn't mean you should automatically exclude it from your data set. It is always recommended to have a scientific rationale for the outlier exclusion in place, for example an undocumented handling error in the process, or perhaps a mistake in the analytical method.

When you do not have this in place, you can also consider performing a model comparison, meaning comparing the model with and without the outlier, and this comparison should then be supported by an argument for why the model without the outlier is more representative of your process. But excluding data points on a statistical basis alone is not recommended.

After the data quality check, we then perform our model regression. For this, we use stepwise backward regression, which means we remove non-significant terms from our model, and we follow the principle of effect hierarchy, meaning we are only allowed to remove main effects if they are not contained in a higher-order term, such as a quadratic or an interaction.

This is, for example, marked here with this little sign. It tells us that this main effect is also part of an interaction, so we would not be allowed to exclude this main effect unless the interaction were excluded as well; the interaction itself, however, we can exclude. We can also exclude this other main effect, as it is not part of a higher-order term.

Our threshold for removing terms from the model is a p-value of 0.1, and in situations where we are very close to this threshold, we also consider the root-mean-square error of the model. That means that when we remove the term and the RMSE decreases, the exclusion is warranted, as we want to reduce unknown variation, which is represented by the root-mean-square error.

However, when we remove the term and the RMSE increases, which is now the case, we would not exclude this term, as it is obviously contributing to explaining the variation in our experiment.
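The logic of this backward elimination with effect hierarchy can be sketched as follows. The data set, factor names, and candidate term list are hypothetical, and the RMSE tie-break near the threshold is only noted in a comment rather than implemented; this is not JMP's Fit Model platform, just a conceptual illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical 16-run data set with three coded factors and one response.
rng = np.random.default_rng(3)
df = pd.DataFrame(rng.choice([-1.0, 0.0, 1.0], size=(16, 3)), columns=["A", "B", "C"])
df["AB"] = df["A"] * df["B"]          # two-factor interaction
df["A2"] = df["A"] ** 2               # quadratic term
df["Y"] = 250 + 30 * df["A"] - 15 * df["B"] + 12 * df["AB"] + rng.normal(0, 10, 16)

terms = ["A", "B", "C", "AB", "A2"]
contains = {"AB": {"A", "B"}, "A2": {"A"}}   # higher-order term -> main effects inside it
P_THRESHOLD = 0.10                            # removal threshold on the p-value

while True:
    fit = sm.OLS(df["Y"], sm.add_constant(df[terms])).fit()
    rmse = np.sqrt(fit.mse_resid)
    # Effect hierarchy: a main effect stays locked while any remaining
    # higher-order term still contains it.
    locked = set().union(*(contains.get(t, set()) for t in terms))
    candidates = {t: fit.pvalues[t] for t in terms if t not in locked}
    if not candidates:
        break
    worst = max(candidates, key=candidates.get)
    if candidates[worst] <= P_THRESHOLD:
        break                                 # every removable term is significant
    terms.remove(worst)                       # (near the threshold, also compare RMSE)

print(f"Final model terms: {terms}, RMSE = {rmse:.2f}")
```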

After arriving at our final model, we save the prediction formula and also the prediction formula for the standard error, which is important later for defining our proven acceptable ranges. But before we go into this, we first define our critical process parameters. For this, we need to ensure that the Scaled Estimates table is open, which is now the case, and then we use an add-in from our consultant, a specific script, which helps to identify critical process parameters.

This is how it works. We first need to select the Fit Model report where the Scaled Estimates table is open, then we need to define the specification limits for the response currently being investigated. Ideally, you would already have specification limits predefined. However, when you are, for example, investigating a unit operation within the process, limits for this intermediate product might not exist yet. In such a case, we try to consider at-scale data and then use the minimum and maximum values from those batches that finally passed the final product specification.

If this is also not available, we have a default approach to define the specification limits, which is to use 80% of the DOE process mean for the lower specification limit and 150% of the process mean for the upper specification limit. But in our situation we are lucky to have specification limits in place, which are 150 for the lower limit and 350 for the upper limit.
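As a hypothetical worked example of that default rule: for a DOE process mean of 250, the defaults would give a lower specification limit of 0.8 × 250 = 200 and an upper specification limit of 1.5 × 250 = 375.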

We can then proceed. The next step is to define the threshold for the determination of critical process parameters. Earlier I said that at a mean shift of 20% of tolerance we declare process parameters as critical, but you can also adapt this to your requirements, increasing or decreasing it as you like; in our situation we will stay with 20%.

Finally, we click Run. The script then performs the CPP calculation, which is done by taking the scaled estimates, the coefficients, also known as half effects, and multiplying them by a specific multiplier. For main effects we apply a multiplier of two; this is also true for interactions, and for quadratic effects we use a multiplier of one.

In this way, the full effect is calculated, and this full effect is then related to the tolerance range, the range between the upper and lower specification limits. If the full effect is more than 20% of this tolerance range, we declare the term a critical process parameter, which is true for the first three terms here in our model.
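A minimal sketch of this criterion follows, using the spec limits from the demo (150 and 350, so a tolerance of 200 and a threshold of 40) but hypothetical scaled estimates. It only illustrates the rule described above; it is not the consultant's add-in.

```python
# Minimal sketch of the CPP rule described above. The scaled estimates (half effects)
# are hypothetical; the spec limits and multipliers follow the rules stated in the talk.

lsl, usl = 150.0, 350.0
tolerance = usl - lsl                  # 200
threshold = 0.20 * tolerance           # 40: the full effect must exceed 20% of tolerance

scaled_estimates = {                   # term -> (half effect, term type), hypothetical values
    "A":   (32.0, "main"),
    "B":   (-25.0, "main"),
    "A*B": (22.0, "interaction"),
    "C":   (8.0, "main"),
    "C*C": (30.0, "quadratic"),
}
multiplier = {"main": 2.0, "interaction": 2.0, "quadratic": 1.0}

for term, (half_effect, kind) in scaled_estimates.items():
    full_effect = abs(half_effect) * multiplier[kind]
    is_cpp = full_effect > threshold
    print(f"{term:4s} full effect = {full_effect:6.1f}  CPP: {is_cpp}")
```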

After defining our critical process parameters, this exercise is of course done for all of the responses in our model. We then use the profiler to define our proven acceptable ranges for these process parameters. I've done some pre-filling here, but I will briefly explain what I've done. I opened the simulator option from the profiler menu, and then I added the respective RMSE values, the random noise of the responses.

In addition, I defined a triangular distribution for the respective factors, with the peak located at the set point of the at-scale process and the lower and upper ends representing the extreme points from the DOE study.

Then we can easily simulate our process and see how high our defect rate is. As we can see, it's quite high, about 33%, so there is clearly a need for some improvement.

In the past, we used the Edge of Failure script from our consultant to define our proven acceptable ranges. This is currently still done at the Lentigen site, our process development site in Gaithersburg, because they are still using JMP 16. However, since JMP 17, the Design Space Profiler is also available, and this is the tool I would like to show you today.

I will open the Design Space Profiler, and what I really like here is that we can directly see the in-spec portion for the whole process. This is also given per response, so we can see the individual in-spec portion for every response. We now have the possibility to either manually adapt our factor ranges or to do this automatically, which I will do now.

You can see that the in-spec portion is increasing and the factor ranges are decreasing; our aim is finally to have an in-spec portion of at least 99%, which is now the case. You might come to the conclusion, "Wow, this is very tight. Oh my goodness, this is not practical for our manufacturing teams." Then, of course, you can also readapt these limits based on your requirements.

For example, here we have a lot of flexibility: we could tighten this range a little bit, and this one as well, and we see this even gives us a little more flexibility elsewhere. This is very nice. You can proceed like this until you are finally happy with your parameter ranges and your in-spec portion. The limits defined by this procedure can, of course, be rounded to a more relevant decimal place, but these factor ranges are then finally the proven acceptable ranges for the process.
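Conceptually, the automatic mode behaves like the following hypothetical sketch: shrink the factor ranges step by step and re-simulate until the in-spec portion reaches the target. The model, its coefficients, and the use of uniform factor distributions over the candidate ranges are simplifications for illustration, not JMP's actual algorithm or the demo's triangular distributions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical fitted model for one response; spec limits as in the demo.
def predict(A, B):
    return 250 + 60 * A - 40 * B + 10 * A * B

lsl, usl, rmse = 150.0, 350.0, 35.0
target_in_spec = 0.99

def in_spec_rate(a_range, b_range, n=20_000):
    """Simulate runs with factors uniform over the candidate ranges plus residual noise."""
    A = rng.uniform(*a_range, size=n)
    B = rng.uniform(*b_range, size=n)
    y = predict(A, B) + rng.normal(0, rmse, size=n)
    return np.mean((y >= lsl) & (y <= usl))

# Start from the full DOE ranges and shrink toward the midpoints until the
# simulated in-spec portion reaches the target.
a_range, b_range = [-1.0, 1.0], [-1.0, 1.0]
for _ in range(200):                              # safety cap on iterations
    if in_spec_rate(a_range, b_range) >= target_in_spec:
        break
    for r in (a_range, b_range):
        mid, half = (r[0] + r[1]) / 2, (r[1] - r[0]) / 2
        r[0], r[1] = mid - 0.95 * half, mid + 0.95 * half

print(f"Candidate PARs: A in {np.round(a_range, 2)}, B in {np.round(b_range, 2)}")
```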

We can then also add the respective midpoints and limits to the profiler, using the normal-with-limits-at-three-sigma option, and then reverify the in-spec portion of the process, which is, of course, again above 99%, which was our aim, so very nice.

Finally, it is of course very important and necessary to validate these limits with the manufacturing departments, because these limits have to be practical within routine production, so this is ultimately discussed with the manufacturing departments.

I hope I was able to show you that we have established a very meaningful approach to using the design space for PAR and CPP definition. We first talked about a risk-based approach to define a tailored design for DOE studies, based on your specific requirements. We also talked about how to define critical process parameters, considering the actual impact on our CQAs, and about how to define proven acceptable ranges for your process and how to use the Design Space Profiler for this exercise.

As I said, we also use this methodology for our own products and are very happy that one of our clients passed a BLA submission last year using this approach. With this said, I would like to thank you for your attention.


