Relative potency assays are critical in evaluating the biological activity of drug products throughout the lifecycle of biopharmaceuticals. Managing variability, optimizing assay conditions, and ensuring consistent performance are essential but challenging.

This presentation explores how JMP supports developing, validating, and monitoring relative potency assays in CMC (chemistry, manufacturing, and controls).

By integrating data visualization, statistical analysis, modeling tools (such as logistic regression and response surface methodology for establishing optimal assay conditions), and mixed-effects models (for estimating intermediate precision and informing replication strategies), JMP enables robust assay development, procedure validation, and ongoing performance monitoring. The automation and scripting capabilities within JMP further streamline repetitive data analyses, facilitate method and operator performance assessments, and support ongoing procedure performance verification (OPPV) of bioassays in a regulated environment.

 

 

The title of my presentation is The Use of JMP During the Lifecycle of a Relative Potency Assay for CMC. I work at Byondis, a company of about 300–350 people in Nijmegen, in the east of the Netherlands. We create targeted medicines, aimed mostly at intractable cancers. The products we make are antibody-drug conjugates and monoclonal antibodies.

Monoclonal antibodies are Y-shaped, complex macromolecular proteins. They are produced in recombinant cell culture expression systems, where the cells are genetically engineered to contain the information necessary for producing the desired product. This is a representation of an antibody-drug conjugate, where again we have the Y-shaped, complex macromolecular structure; the drug is attached via a linker, which is a chemical moiety.

This is a high-level overview of a biotech product lifecycle. We have research, we have development, and hopefully, we have commercialization. Our research colleagues are responsible for developing a so-called clinical candidate: a promising compound that has shown sufficient efficacy and safety in preclinical testing to be advanced into clinical trials.

The development part is what we call CMC, which stands for chemistry, manufacturing, and controls. Here we develop our production process. We want this process to result in a drug that is safe, hopefully efficacious in humans, and of constant quality.

There are two groups mainly responsible for developing this process: the Upstream Processing (USP) department and the Downstream Processing (DSP) department. USP focuses on optimizing the conditions for growth of the engineered cells in bioreactors and is also responsible for scaling up the production process. DSP handles the purification and isolation of the biological product from the cell culture and ensures that the final product is pure, concentrated, and free from contaminants.

To develop their respective processes, these groups need to make informed decisions that are data-driven, scientifically sound and risk-based. A large part of the information they require is provided by the analytical development group. My group is part of the analytical development group and is called the potency assay group, or the group that develops functional assays.

Potency assays quantify the biological activity induced by the product upon interaction with its target biological substrate. We preferably want this to mimic the biological activity in the clinical situation.

This is the life cycle of a potency assay, or if you like, any analytical method. Where we design and develop the method, we demonstrate fitness for intended purpose, and we verify and control that this demonstrated fitness for intended purpose is maintained throughout the life cycle of the method.

As a preamble to the design and development of the potency assay, we have to consider the following. An analytical method is essentially a process: the analytical process converts the input, in this case the sample, into raw data, the output. Raw data for potency assays consist of a collection of measurements resulting from challenging a biological system with increasing concentrations of your drug or product. This results in an increase in biological activity that eventually plateaus.

We then perform logistic regression to fit a model to the raw data, which results in dose-response relationships. These dose-response relationships contain parameter information that allows us to compare the sample-mediated dose-response curve to a reference standard curve, here in red. We then calculate a horizontal shift, and this provides us with a relative potency estimate that we express either as a percentage or as a ratio. Finally, this relative potency value, or a geometric mean of a specific number of relative potency values, provides us with the reportable information.
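As an illustration of this curve-fit-and-shift calculation, here is a minimal Python sketch with simulated data. It is only an assumption-laden stand-in for the actual workflow (which, as noted later in the talk, uses JMP and dedicated GMP-validated software); the 4PL parameterization and the use of scipy's curve_fit are my choices.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    # 4-parameter logistic: lower/upper asymptote, EC50, slope (Hill)
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

# Simulated dose-response data for a reference standard and a sample
dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
rng = np.random.default_rng(1)
ref = four_pl(dose, 2.0, 100.0, 10.0, 1.2) + rng.normal(0, 1, dose.size)
smp = four_pl(dose, 2.0, 100.0, 12.5, 1.2) + rng.normal(0, 1, dose.size)

lo = [-10.0, 50.0, 1e-3, 0.1]   # keep EC50 and slope positive during the fit
hi = [20.0, 200.0, 1e3, 10.0]
p_ref, _ = curve_fit(four_pl, dose, ref, p0=[0, 100, 10, 1], bounds=(lo, hi))
p_smp, _ = curve_fit(four_pl, dose, smp, p0=[0, 100, 10, 1], bounds=(lo, hi))

# The horizontal shift between parallel curves is the ratio of EC50
# estimates; reference over sample gives the relative potency
rp = p_ref[2] / p_smp[2]
print(f"relative potency: {rp:.2f}")
```

With parallel curves (shared asymptotes and slope), the EC50 ratio fully captures the horizontal shift, which is why parallelism is assessed before a relative potency is reported.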

To recapitulate the preamble: bioassays, which go under different names (biopotency assays, biological assays), conceptually represent a process. Consequently, a great number of generic process development, troubleshooting, and control tools can be applied in JMP. Another thing to remember is that these assays typically exhibit greater variability than chemically-based tests do.

Now we get to design and development. When there is a requirement for a bioassay, we start by identifying and selecting the potentially critical variables. For this, we use prior knowledge (our knowledge base), scientific rationale, and common sense; we take practical limitations into consideration; and we perform initial experimentation. To organize your thoughts around all these considerations, you could use an Ishikawa diagram, for example. This is functionality that JMP provides.

Then we start to establish our design space, which in analytical terms is called MODR or method operable design region. We start to find the optimal settings using design of experiments to generate curves that provide us with accurate and precise relative potency results.

This is an outcome of a designed multivariate experiment, in which we varied the numbers of target and effector cells. Visual inspection suggests that the red curves do not meet our requirements, so we have to choose one of the black curves. To make an objective decision about which dose-response curve is best, we have to rely on statistical analysis.

If you're familiar with these response surface model designs, you will recognize the contour plots, the prediction profiler, and the response surface plot. The prediction profiler shows the factors of interest and their level ranges, our responses of interest, the most significant factors in red, and the desired outcomes. In this case, we want this response to be as low as possible, this one to be on target, and this one to be as high as possible.

To fit a model to our data, we use 4-parameter logistic (4PL) regression. This can be done in JMP, although we use dedicated GMP-validated software. Note that JMP uses different terminology for what we call the slope parameter and the EC50 or IC50: JMP uses growth rate for the slope parameter and inflection point for the EC50. JMP also provides all the other relevant statistics, for example the F probability for a parallelism test, and of course our relative potency values, which JMP reports relative to a reference standard as a ratio.

Next is performance qualification or validation where we demonstrate fitness for intended purpose. I will focus on two main validation performance characteristics which are relative accuracy and intermediate precision.

Validation itself is the act of demonstrating that a procedure is suitable for its intended purpose. I use qualification and validation interchangeably because USP <1220> states that all activities that confirm suitability for intended purpose can be considered under the validation umbrella.

This is an example of a validation design. In fact, this design is from United States Pharmacopeia chapter <1033> on bioassays. We have two operators who will measure a range of nominal potencies between 50% and 200% over a number of runs (or days), on different 96-well plates within the same run. In addition, the USP adds different media lots, so that is another factor to be considered.

This is a graphical representation of the relative potency measurements resulting from this validation design. We have different within-run measurements: the same sample, or the same nominal concentration, on different plates. We have different between-run measurements: these are the different runs. This is all per nominal relative potency. Also, we have two different operators; the blue and the red points denote the different operators. From these data, we can estimate accuracy and precision.

JMP allows us to fit a mixed-effects model with restricted maximum likelihood (REML) estimation, so we can estimate our within-run and between-run variance components. The sum of these variance components is what we call intermediate precision. This allows us to establish an optimized replication strategy, which is something we'll get into on a later slide.
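As a hedged illustration of what such a variance component analysis computes, here is a Python sketch using the ANOVA (expected mean squares) estimator on simulated, balanced data; for a balanced one-way layout this coincides with REML whenever the between-run estimate is nonnegative. The layout and numbers are my assumptions, not the validation data.

```python
import numpy as np

def variance_components(data):
    # ANOVA (expected mean squares) estimates for a balanced one-way
    # random-effects layout: rows are runs, columns are plates in a run
    k, n = data.shape
    run_means = data.mean(axis=1)
    ms_within = ((data - run_means[:, None]) ** 2).sum() / (k * (n - 1))
    ms_between = n * ((run_means - data.mean()) ** 2).sum() / (k - 1)
    var_within = ms_within
    var_between = max((ms_between - ms_within) / n, 0.0)
    return var_within, var_between

# Simulated potency measurements (%): 6 runs x 3 plates per run
rng = np.random.default_rng(7)
run_effect = rng.normal(0.0, 2.0, size=(6, 1))            # between-run sd 2
data = 100.0 + run_effect + rng.normal(0.0, 1.0, (6, 3))  # within-run sd 1

vw, vb = variance_components(data)
ip_sd = np.sqrt(vw + vb)  # intermediate precision as a total sd
print(f"within-run: {vw:.2f}, between-run: {vb:.2f}, IP sd: {ip_sd:.2f}")
```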

Using a mixed-effects model, we can also estimate variance components for other validation study factors; in this case, it was the media lots that were also varied. JMP allows us to plot these in a Pareto chart, which enables us to immediately identify the factor contributing most to the variance.

We can use our variance component estimates to reduce our intermediate precision by repeating the measurement of the same sample in different runs and/or on different plates within the same run. This graph shows how intermediate precision is reduced by this strategy: increasing the number of runs reduces it most; increasing the number of repeated measurements on different plates within the same run also reduces it, but less markedly.

This also shows us that reducing variance in this way is finite. We go from about 15% to somewhat over 5%, and at some point adding repeats is simply not feasible anymore. In black, we have the number of repeats: it takes 15 repeats to reduce our variability from about 15% to just over 5%, which is practically not feasible.
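The shape of these curves follows directly from the variance components: the intermediate precision of a mean over n_runs runs with n_plates plates per run is sqrt(var_between/n_runs + var_within/(n_runs*n_plates)). A small sketch, with an illustrative 12%/9% split chosen only to give a single-measurement intermediate precision near 15%:

```python
import math

def ip_of_mean(var_between, var_within, n_runs, n_plates):
    # sd of the reportable result when the sample is measured on
    # n_plates plates in each of n_runs independent runs
    return math.sqrt(var_between / n_runs
                     + var_within / (n_runs * n_plates))

vb, vw = 12.0 ** 2, 9.0 ** 2   # illustrative: 12% between-run, 9% within-run
print(ip_of_mean(vb, vw, 1, 1))   # 15.0 -> single measurement
print(ip_of_mean(vb, vw, 5, 1))   # ~6.7 -> more runs helps most
print(ip_of_mean(vb, vw, 1, 5))   # ~12.7 -> more plates alone helps less
print(ip_of_mean(vb, vw, 5, 3))   # ~5.8 -> combined strategy
```

Because the between-run component is divided only by the number of runs, repeating plates within one run can never remove it, which is why that curve flattens out sooner.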

The second important validation parameter is relative accuracy. Here, we expect to find relative potency measurements that on average are in line with the nominal potencies of the samples. What we calculate is the deviation from these averages. We expect that after regression, the results lie close to the unity line, which means an intercept of zero and a slope of one.

We can see visually that the regression suggests an overestimation at higher nominal potencies. JMP also provides us with the regression equation, from which we can verify whether our intercept indeed approximates zero and our slope indeed approximates one.
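The unity-line check can be sketched as an ordinary least-squares fit on the log scale. The numbers below are illustrative (with the 200% level deliberately inflated to mimic the overestimation just described); the real analysis is done in JMP:

```python
import numpy as np

# Illustrative geometric-mean RP results (%) per nominal potency level;
# the value at 200% is deliberately inflated to mimic overestimation
nominal  = np.array([50.0, 71.0, 100.0, 141.0, 200.0])
measured = np.array([50.5, 70.2, 101.0, 144.0, 215.0])

# Regress log(measured) on log(nominal); the unity line corresponds to
# an intercept of zero and a slope of one
slope, intercept = np.polyfit(np.log(nominal), np.log(measured), 1)
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")
# A slope above one means potency is increasingly overestimated toward
# the high end of the validated range
```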

This is yet another way of plotting the same data. It confirms that we have somewhat of an overestimation at the high nominal potencies. This plot of the same data also includes prediction intervals, which allow us to assess whether individual measurements are potential outliers.

In addition, this plot provides us with variance estimates per log nominal potency, which should be in line with each other and on average represent total intermediate precision. They should be in line with each other because we want constant variance for the mixed-effects model.

Finally, the validation parameters are held against predefined acceptance limits, here shown with the dotted line. In this case, the 200% level, the highest nominal potency, would not pass validation, which is something we could have expected from the visual inspection we saw earlier.

We also perform validations and qualifications as a tool to assess operator performance, or as part of the operator training process, for which we do repeated qualification and validation exercises. To facilitate this, we have written scripts that help us with the data analysis.

Next is performance qualification: how do we set validation target acceptance criteria associated with the performance of our methods? One way of approaching this is using the probability of being out of specification. This graph shows, for a method with a given capability, that when your manufacturing process is on target (hopefully at 100% of your reference standard), the probability of an out-of-specification (OOS) measurement, which you should not get at this point, is very low. That is what we want. When your process mean moves off target, the probability of obtaining an OOS result increases, which is indeed what you want.

This contour plot contains the probabilities of obtaining an OOS result in relation to the specification of choice; here, the example is 70–143%. The white area represents a probability of being out of specification below 5% when the process is on target, that is, a low probability of an unwanted, false OOS result for your specification of choice.

A more stringent specification reduces this area significantly. The black lines represent the capability of the method, which is a combination of the measurement bias of the method and its intermediate precision; here we have the numerical values of 12% for the bias and 8% for the intermediate precision.

For this specification, the method is not capable: the black lines cross outside the white area. For this less stringent specification, the method is capable: the black lines cross within the white area. This plot provides us with some flexibility, in that we can choose post hoc relative bias and intermediate precision combinations that fall within this area, instead of holding validation parameters against predefined acceptance criteria. The prediction profiler provides essentially the same information; it's just JMP allowing us to assess the data in a different way.
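The calculation behind such a contour plot can be approximated with a lognormal measurement model. The sketch below is my simplification: the 70–143% specification and the 12% bias / 8% intermediate precision figures are from the talk, while the tighter 80–125% alternative and the distributional assumptions are mine.

```python
import math

def p_oos(bias_pct, ip_pct, lsl=70.0, usl=143.0, target=100.0):
    # Probability that a single reportable result falls outside the
    # specification when the process is on target, assuming lognormal
    # measurements (a simplification of the talk's contour plot)
    mu = math.log(target * (1.0 + bias_pct / 100.0))
    sigma = math.log(1.0 + ip_pct / 100.0)
    cdf = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    below = cdf((math.log(lsl) - mu) / sigma)
    above = 1.0 - cdf((math.log(usl) - mu) / sigma)
    return below + above

# Capability quoted in the talk: 12% bias, 8% intermediate precision
print(f"P(OOS), 70-143% spec: {p_oos(12, 8):.4f}")           # below 5%: capable
print(f"P(OOS), 80-125% spec: {p_oos(12, 8, 80, 125):.4f}")  # above 5%: not capable
```

Under this model, the same bias/precision pair that is comfortably capable against the wide specification exceeds the 5% false-OOS threshold against the tight one, which is the trade-off the contour plot visualizes.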

Finally, we have ongoing procedure performance verification, where we verify and control that the demonstrated fitness for intended purpose is maintained throughout the life cycle of the method.

OPPV is an ongoing exercise for which we have written scripts that produce this dashboard layout. On the left, in the column switcher, we have added several dose-response-associated parameters that we keep under process control and compare to acceptance limits. That is, our control limits in red should fall within our acceptance limits, which in this case is just an upper limit in blue.

CUSUM charts help us identify early whether our control sample is starting to drift away from its target value, in our case 100%, which is a prompt for investigating potential loss of fitness for intended purpose.
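A tabular two-sided CUSUM of the kind shown on the dashboard can be sketched as follows; the allowance k, decision interval h, and the simulated drift are illustrative assumptions:

```python
def cusum(values, target=100.0, k=2.0):
    # Tabular two-sided CUSUM: accumulate deviations beyond the
    # allowance k above (hi) and below (lo) the target
    hi = lo = 0.0
    highs, lows = [], []
    for v in values:
        hi = max(0.0, hi + (v - target - k))
        lo = min(0.0, lo + (v - target + k))
        highs.append(hi)
        lows.append(lo)
    return highs, lows

# Simulated control-sample relative potency (%): on target, then drifting up
rp = [99, 101, 98, 102, 100, 104, 105, 106, 107, 108]
highs, lows = cusum(rp)
h = 8.0  # decision interval: a drift is signaled when the CUSUM exceeds it
signals = [i for i, c in enumerate(highs) if c > h]
print("first signal at index:", signals[0] if signals else None)  # index 7
```

Because the CUSUM accumulates small sustained deviations, gradual drift becomes visible before individual points look alarming on their own.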

What I did not discuss due to time constraints, but would like to mention to conclude is that we use JMP for other relevant exercises associated with the bioassay life cycle, such as reference standard bridging, which is a complex and hot topic in the bioassay world, and root cause investigations, which can be quite complex for these multifactorial methods.

Finally, I would like to acknowledge my group, Dennie, Geert, Jake, Karlijn, Lisette, Pim, Rob, Sanne, because these are the people who do all the work and work with JMP to generate these nice data.

Skill level

Intermediate

Published on 12-15-2024 08:23 AM by Community Manager | Updated on 03-18-2025 01:12 PM




Start:
Thu, Mar 13, 2025 05:00 AM EDT
End:
Thu, Mar 13, 2025 05:45 AM EDT
Salon 5-London