
Optimisation of a milling process to match the innovator drug product quality attributes (2021-EU-30MP-749)

Christel Kronig, Senior Analytical Scientist, Dr. Reddy's Laboratories EU Ltd
Andrea Sekulovic, Scientist Formulation Process Development, Dr. Reddy's Laboratories Ltd.

 

A key aspect of the development of generic drugs is that sameness to the innovator product must be demonstrated. In this study, the objective was optimisation of a milling process to generate a drug product with particle size attributes all in the same range as that of the innovator product.

Two different modelling techniques were evaluated to model the particle size attributes over time: Functional Data Explorer versus Fit Curve. The curve parameters or Functional Principal Components (FPCs) were then modelled as a function of the process parameters. Finally, the models obtained were used to predict the particle size attributes over time and identify combinations of process parameters likely to generate a drug product of the desired quality. A verification experiment was performed, which resulted in a product with particle size attributes matching the requirements.

 

 

Auto-generated transcript...

 



Christel Kronig Well hi everyone, and thank you for joining this talk. My name is Christel Kronig and I'm a scientist at Dr. Reddy's in Cambridge in the UK. I
  helped with the data analysis on this project. My colleague, who also worked on this study, is Andrea Sekulovic and she's based at Dr Reddy's in the Netherlands and she's a formulation scientist.
  And so today I'm going to talk to you about the optimization of a milling process to match the drug product quality attributes of the innovator.
  And so the first part of the presentation will be talking to you about the process development, the objective of the project, what the study involved,
  and what modeling options we considered, and then in the second part, I will look at the workflow that we developed in JMP for this study.
  Okay, so the objective of this study was really to understand the relationship between the process parameters for the milling process
  and the quality attributes for the drug product that we're making, so our responses. So we wanted to obtain a predictive model that we could use for scale-up and also to optimize the conditions that we would need for this process.
  So there were several responses that we had to examine as part of the study and they are particle size attributes.
  So we looked at micron and span, and by studying the innovator product, we also knew what the range needed to be to make sure we had a product that was within the specification and similar to the innovator product.
  So the profile of the responses would vary based on the milling time and the milling process parameters. To find the optimum conditions, we needed to optimize these parameters to make sure that we would have product in the design range that met the specification requirements. The process parameters were milling speed, flow, size, a loading parameter, excipient percent, API concentration and, of course, the milling time.
  So the process development was...
  we started with making some initial batches. We didn't start directly with doing a design of experiments. We
  looked at the data that needed to be collected with those first few batches that were made and we looked at modeling options. And then the team
  in the Netherlands decided to do an I-optimal design, so they looked at three parameters and time and performed some initial modeling.
  After those first data sets, they decided to add three additional parameters. So we augmented this design and added 10 additional experiments. The final data set that we
  looked at for the optimization had 38 batches and included six parameters and time, and this is what we used for the optimization in this study. So after that we then made some confirmation batches to check if the
  new settings would generate products that meet the requirements.
  Okay, so what modeling options did we consider
  for this project? So the default option for the team was really to model the response at selected time points.
  And it's easy to do that in standard software, but the disadvantage, of course, is that it's not possible to predict the outcome
  at other time points, and the optimum may be in between specific time points. So modeling the profile over time enables greater understanding of how the process parameters affect the profile of the response over time,
  so you're more likely to reach an optimum. But for this, of course, you need more advanced modeling capabilities.
  And so we looked at first fit curve, which is available in JMP.
  And for our initial data set that worked quite well. So this is one of the functions, the Biexponential 4P,
  that we used; it appeared to be a good fit for most of the batches that we made initially when modeling some of the responses, and there's an example on the right of how this type of curve fitted our data quite well.
  And one of the issues we encountered is that it didn't work for all the batches. So, for example, in some cases we didn't have enough time points. On the left there are not enough time points
  to fit that model; you would need a minimum of five, for example, for this particular model. On the right we have one where we have enough time points, but that particular type of curve doesn't fit the data very well.
  So it, it was difficult using this for the larger data set that we had, and so, for that reason we didn't continue with that approach.
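As a rough illustration of the fit-curve step outside JMP, a nonlinear least-squares fit of a four-parameter biexponential can be sketched in Python with SciPy. The model form shown is a common parameterisation (JMP's exact form may differ), and the data and parameter values are made up for illustration; this is not the study's implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Biexponential 4P: y = a*exp(-b*t) + c*exp(-d*t)
# (a common parameterisation; JMP's exact form may differ)
def biexp4p(t, a, b, c, d):
    return a * np.exp(-b * t) + c * np.exp(-d * t)

# Synthetic particle-size profile for one batch (illustrative values only)
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 24.0, 32.0])
y = biexp4p(t, 30.0, 1.2, 8.0, 0.05) + np.random.default_rng(1).normal(0, 0.2, t.size)

# With fewer points than parameters the fit is underdetermined,
# which mirrors the failure mode seen on the short batches above.
params, cov = curve_fit(biexp4p, t, y, p0=(20, 1, 5, 0.1), maxfev=10000)
```

The per-batch curve parameters obtained this way would then play the same role as the FPC scores in the rest of the workflow.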
  And we also looked at Functional Data Explorer.
  So you can see here, looking at this platform for 10 different batches, you can see those on the left and how the profile was fitted...
  profile over time, so there was no issue with lack of data points here.
  This gave a good view of differences between batches. So, for example,
  in green on the graph in the right hand side, you can see the fast batch is in this part of the space and the slow batch appears in a different part of the space and that highlights the difference
  between those profiles which is perhaps not so obvious when you look at the graph on the left hand side.
  So what you get with the Functional Data Explorer is
  it breaks down the profile over time into different functional principal components, so the FPC values that we see here,
  and this is what you then use for the next part of the workflow. So this is available in JMP Pro; I forgot to say this. So, first, before I show you what it looks like in JMP, I just wanted to take you through
  what the workflow looks like, what we are trying to do. So we're starting with...apologies for that...we're starting with
  a data table which has the time points and, for each of those time points,
  the critical quality attributes, our responses, and also the process parameters that were used to generate the batches.
  So you then take that data and you use FDE to get a model, and this will mean that you can express your CQAs, your responses, as a function of time and those functional principal components (FPCs).
  So the output of that will be that you then get a summary table. For each batch you have CPPs and the FPC, functional principal component, and then you can apply standard modeling
  to then get the predictive model to express those FPCs as a function of your process parameters.
  And then the final step is to import that model back into the original table, so you can then express your responses as a function of time and your process parameters,
  which allows you to find the optimum conditions and to make confirmation batches using those models.
  And what I've got to say is that for the modeling I'll show you in JMP, we use this model validation strategy for designed experiments; that's something I presented
  three years ago at Discovery Summit. I won't go into the details of that, but it's there for reference if you want to look it up.
  So okay so I'll now take you through what this workflow looks like in JMP and I'll switch over to JMP so...just find my
  JMP journal.
  Okay, going to move this here.
  Okay, so we'll first start with the
  original data table, so we have a number of batches that were made, so 38 batches. Each batch has
  a number of time points. For example, the first batch here, we have 10 different time points. You have your six columns which are the process parameters.
  And then we have two responses for each of those
  data points. So the first thing is to look at the data table and visualize that data set, what it looks like. So I have a script here using Graph Builder, which very quickly gives a good overview of
  what the data looks like. So, for example, you have one of the responses
  with the milling time here at the bottom, and you can straightaway see that the profiles are quite different, depending on the batches, some are steeper than the others, some are very shallow and also some were collected over a longer or shorter period of time.
  So we'll now look at the profile in a bit more detail and look at the two modeling approach that we
  talked about previously in the slides.
  So the first one is using the fit curve, so that's under the analyze platform, under specialized modeling.
  So if I select fit curve.
  I'm going to pick my milling time as my X and one of my responses, and then I'm going to select batch, so the fit is done for each batch. Click OK. I then have, for each batch, a profile of the response over time.
  And then I'm going to
  use one of the models that JMP has already stored, and this is the Biexponential 4P, which I
  talked about before. So I won't go through the different models, but I know this is one of the ones that we looked at previously for our data. So, for example, for this batch it fits okay, but not brilliantly for some of the data points.
  For this one, there's not enough time points so you don't,
  you know, you can't really use that, but for some of the batches,
  so this one, for example, fitted really well, so you had the four coefficient estimates and they were statistically significant.
  So what you would do, then, is just export all that data and get a summary table in the same way that we're going to do for the Functional Data Explorer but... So I won't do any more using the fit curve in this demo, but the same approach could be used.
  So let's go to the Functional Data Explorer, so it's also in specialized modeling in JMP Pro.
  And I'm going to select my response and my milling time and also my ID is my batch number.
  So I'm not going to explain again in a lot of detail, bearing in mind the time we have,
  what modeling to use for this type of data. I know B-Spline works quite well with my data set, so this is what I'm going to use.
  And as you can see JMP fitted a model for each of the batches that we have in our data table, seems to fit quite well.
  So if I look further down, you can see that it's broken the profile into different components.
  And you can see on the graph here where the batches are in the space. Now for this particular response, FPC1 here at the top actually explains 96% of the variation in the data, which is pretty good, so we wouldn't need, in this case, to
  look at FPC2 and FPC3. In this instance, we probably only need to keep the first one. That wouldn't be the case for, for example, the other response that we have in this data set, but here I'm going to restrict the number of FPCs to one.
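The idea behind the FPC scores and the explained-variation figures can be sketched as a discretized functional PCA: stack the batch profiles on a common time grid, centre them, and take an SVD. JMP's FDE fits basis functions such as B-splines first, so this numpy sketch with synthetic decay profiles only illustrates the concept, not the actual FDE algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 30, 16)                     # common milling-time grid
# Synthetic batches: fast and slow decay profiles (illustrative only)
rates = rng.uniform(0.05, 0.5, size=10)
profiles = np.array([40 * np.exp(-r * t) + 5 for r in rates])

mean_curve = profiles.mean(axis=0)
centered = profiles - mean_curve
# SVD of the centered curves: rows of Vt are the eigenfunctions,
# U * S gives each batch's FPC scores
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
scores = U * S
explained = S**2 / np.sum(S**2)                # fraction of variation per FPC
```

When one component dominates `explained` (as with the 96% figure here), only the first score column needs to be carried into the next modeling step.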
  And then I'm going to
  export my summary data, so when I click on save summary, I have a new table appear
  in JMP. This is so, you can see, 38 rows, so I have one row per batch, I have my batch number, I have my FPC value, and I have some prediction formula.
  So I'm going to close this table now and use one I've prepared earlier, which has got the
  FPC for all the responses that I want to look at in my data set.
  Let me do that and switch back to my journal. Okay so we're now at step two, where I have a summary table that I've prepared and I have the columns for each of my responses for my FPCs.
  So the first thing I want to do here is to use this
  validation
  technique for DOE, where I'm going to create extra rows, which I'm going to use for the validation report. So I gave you the reference in the slides if you want to understand more about that technique, you can do that. So we're then going to
  fit a model for one of my FPCs,
  which I want to examine as a function of some of my process parameters.
  And click on run.
  And I'm going to use a stopping rule which is minimum AICc. Click on Go.
  So JMP has found several process parameters and also interactions that it considers important. You can see the R squared and R squared adjusted
  look good, and by adding this extra R squared validation you also get an indication that the model is looking good and hasn't overfitted, for example. So I'm going to click on make model
  and
  then
  I have a model where I can see that milling speed and size were important and some of the interaction terms were also important, so what I need to do next is save the prediction formula.
  And I can also save the script for when I want to do that again later on and save that to my data table.
  So I'm going to close this window. So you would need to do this exercise of fitting the model for each of the FPC values that you have for your data table.
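The stepwise fit with the minimum-AICc stopping rule can be sketched as greedy forward selection over the process parameters. The data, parameter names and coefficients below are invented for illustration, and JMP's stepwise platform does considerably more (interactions, heredity rules, etc.); this only shows the selection logic.

```python
import numpy as np

def aicc(rss, n, k):
    # k = number of estimated parameters (coefficients plus error variance)
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def forward_select(X, y, names):
    """Greedy forward selection, stopping when AICc no longer improves."""
    n = len(y)
    chosen, best = [], np.inf
    while True:
        candidates = []
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            cols = chosen + [j]
            A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = float(np.sum((y - A @ beta) ** 2))
            candidates.append((aicc(rss, n, len(cols) + 2), j))
        if not candidates:
            return [names[c] for c in chosen]
        score, j = min(candidates)
        if score >= best:
            return [names[c] for c in chosen]
        best, chosen = score, chosen + [j]

# Illustrative data: FPC1 driven mainly by two of four process parameters
rng = np.random.default_rng(2)
X = rng.normal(size=(38, 4))               # 38 batches, 4 parameters
y = 3 * X[:, 0] - 2 * X[:, 2] + rng.normal(0, 0.3, 38)
names = ["speed", "flow", "size", "loading"]
selected = forward_select(X, y, names)     # expected to recover speed and size
```

The returned terms play the role of the stepwise report's selected effects, and the fitted coefficients would become the FPC prediction formula.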
  And the last thing that we need to do is to use our prediction formula, so this is the formula I have just saved for FPC1.
  And this is the prediction formula that came from FDE earlier on. So if I right click, this is what it looks like. So I have the FPC1 for each batch
  and then I have those extra columns which are functions of time. So what I need to do now is, instead of the actual FPC1 value, use my prediction formula,
  which is a function of my process parameters. I just literally replace that in the formula here and click apply.
  OK so again, you will need to do that for any of the models that you generate and then save those in your data table.
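The substitution just described, swapping the stored FPC1 value for its prediction formula, amounts to composing two models: the FDE reconstruction of the curve and the regression of FPC1 on the process parameters. A minimal sketch, with an invented mean function, eigenfunction and coefficients:

```python
import numpy as np

t_grid = np.linspace(0, 30, 16)
# Illustrative FDE pieces: a mean function and one eigenfunction of time
mean_curve = 20 * np.exp(-0.2 * t_grid) + 5
phi1 = np.exp(-0.2 * t_grid) - np.mean(np.exp(-0.2 * t_grid))

# Illustrative FPC1 model from the stepwise step (made-up coefficients)
def fpc1_hat(speed, size):
    return 1.5 * speed - 0.8 * size

def predict_profile(speed, size):
    """Response over time as a function of the process parameters:
    substitute the FPC1 prediction into the FDE reconstruction."""
    return mean_curve + fpc1_hat(speed, size) * phi1

profile = predict_profile(speed=2.0, size=1.0)
```

The result is exactly what the edited column formula provides in JMP: a predicted response profile driven by milling time and the process parameters together.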
  So I'm going to close this and we'll go on to the next step in the journal.
  Okay, so what you need to do then is import these formulas that we've just saved back into the original table, so we can see how the responses vary as a function of time and the process parameters. So I'm going to open my time data table and I'm going to open my summary table with my models.
  So I'm going to select those columns with the models and I want to copy those columns into
  my original table, which I've now lost.
  There it is. So I'm going to click paste columns now.
  And I just want to double check that it's copied my formulas across.
  So, for example, if I go there, yes, my formula has been copied. So what I now have is
  the model which predicts the response as a function of time and the process parameters. So I'm again not going to save this but use the final table, which has all the models that I need to then do the optimization.
  So this is the last step.
  So I now have again my process parameters, my milling time, the two responses and then the prediction formulas, which came across from the summary table. So I'm going to use these two columns now with the profiler.
  And what I also need to do is look at the factor grid; the team wanted to set the API concentration, excipient and size to specific settings that they wanted to find the optimal conditions for. So you lock those settings and click OK.
  And then you can use the desirability functions to set the specification limit, which I have done already.
  And then maximize desirability, and this would
  get JMP to find the best conditions to provide product that would meet the requirements that were set. So this is the technique that we used.
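The maximize-desirability step can be sketched with a simplified two-sided desirability function and a grid search over the free parameters. The response models, coefficients and specification bands below are all invented; JMP's profiler uses the full Derringer-Suich formulation and a proper optimizer rather than a grid.

```python
import numpy as np

def desirability_target(y, low, high):
    """Simplified two-sided desirability: 1 at the centre of [low, high],
    falling linearly to 0 at the limits (a reduced Derringer-Suich form)."""
    target = 0.5 * (low + high)
    half = 0.5 * (high - low)
    return np.clip(1 - np.abs(y - target) / half, 0.0, 1.0)

# Illustrative response models (made-up coefficients, not the real ones)
def span(speed, time):
    return 2.5 - 0.3 * speed - 0.02 * time

def micron(speed, time):
    return 12 - 1.5 * speed - 0.1 * time

# Grid search for the settings with the best overall (geometric mean) desirability
best = None
for s in np.linspace(1, 3, 21):          # milling speed grid
    for tm in np.linspace(5, 30, 26):    # milling time grid
        d = np.sqrt(desirability_target(span(s, tm), 1.4, 2.0) *
                    desirability_target(micron(s, tm), 6.0, 10.0))
        if best is None or d > best[0]:
            best = (d, s, tm)
```

The winning `(speed, time)` pair plays the same role as the settings JMP's profiler reports after maximizing desirability.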
  So I'm going to come out of JMP now and go back to the slide for just to show you what the outcome of this workflow was.
  So I'm going to switch back to the screen. Okay. So out of the 38 batches that we made, that the team in Holland made, there were only four where we had
  at least one time point where both of those results were within the range that was set. But you can see in the table here on the right that for the span response, in all four cases, this was very close to the
  upper specification limit, and the team were really interested in finding conditions that would generate product where that particular response was well within the range
  whilst maintaining the other response also within the target range.
  And yeah, we had a great result. The model with the conditions that were selected predicted a span of 1.63. The actual result for this batch was 1.78, and that was the lowest
  span that was achieved of all the batches made. So the team were really happy with that result. So despite the slight underestimation by the model, this was still a pretty good result. And you can see in the screen here where
  this batch appears in green and it's completely to the left of all the other batches.
  And this is why, you know, we were able to achieve a good result. I guess it was, you know, using a slightly different combination of parameter that enabled this result to be achieved.
  So just a conclusion really that the Functional Data Explorer in JMP Pro worked really well for this application. It yielded a good predictive model and the best result to date.
  We couldn't make use of the fit curve approach so well, despite the promising initial
  results that we'd seen at the beginning of the study, and we couldn't use it for the whole data set, but
  nevertheless, the team was convinced of the value of looking at profiles over time and the value of this approach. And of course you can apply this to other types of data, for example in formulation, you know, in vitro release or in API development reaction conversion, for example.
  And so, this is the end of the presentation, thank you to colleagues in the Netherlands that were involved with milling lots of batches, and
  to Andrea, who's the co-author on this presentation, for great teamwork. We had good interactions between both sides, and that led to some great results. So thank you for listening; I hope you enjoyed the presentation and enjoy the rest of Discovery.