
A New Resolve to Dissolve: Model Drug Dissolution with Functional DOE (2021-US-30MP-916)

Level: Intermediate

 

Amanjot Kaur, Statistician, Perrigo
Rob Lievense, Senior Systems Engineer, JMP

 

Formulation scientists put forth significant hours of work attempting to find an extended release formulation that matches drug release targets. For generic (ANDA) products, the primary objective is to match critical quality attributes (including dissolution over time) to the reference listed drug (RLD). This paper illustrates how functional DOE is an extremely robust and easy-to-use technique for optimizing to a target dissolution profile rapidly with fewer resources.

Dissolution data is collected over time during development, and in-depth analyses are required to understand the effect of the formulation/process variables on product performance. Regression models for specific time points (e.g., 1, 2, 6, and 12 hours) have typically been used to correlate the release responses with the formulation/process variables; however, such analyses violate the assumption of independence for the responses. There is a great need for robust statistical tools to determine the levels of inputs that bring the profile of the developmental product closest to the reference drug. Functional DOE in JMP for modeling dissolution is a new and critical tool to use within a drug development program.

 

 

Auto-generated transcript...

 


Thank you so much for tuning in today to watch Rob's and my Discovery presentation. Today we are discussing the topic A New Resolve to Dissolve, which is modeling drug dissolution data using functional DOE. First of all, let me introduce myself. My name is Amanjot Kaur. I am a statistician with Perrigo Company.
  And my co-author, who's joining me here, is Rob Lievense. First, I'll give you a little introduction to the topic we are discussing today, followed by an introduction to dissolution testing and why it is important. Then we'll discuss the previous methods we used for finding the best inputs to match the target dissolution profile and compare them with functional DOE, which is now available in JMP Pro.
  Like I mentioned, I work as a statistician with Perrigo, which is a pharmaceutical company with a very large market share of over-the-counter and generic drugs. We regularly deal with solid dosage forms, which are developed into new products.
  Formulation scientists in these pharmaceutical companies put in a lot of time and effort attempting to find a candidate formulation that matches the target dissolution profile of an extended release tablet. These days, extended release tablets are more in demand compared to immediate release. As you can see in this slide, with an immediate release tablet you would take eight to ten tablets in 24 hours, whereas with extended release tablets you would take just two tablets in 24 hours, so extended release tablets are preferred over immediate release tablets. When I say target dissolution profile, that can be a currently marketed drug, known as the RLD or reference listed drug, or it can be the batch used for the clinical study, known as a biobatch.
  The data collected during the formulation development of generic products is all submitted to the FDA in an ANDA (abbreviated new drug application), which is really common in my work. The primary objective during this formulation development is to match all the critical quality attributes, including dissolution over time, to the RLD.
  So let's take a minute and learn a little bit about dissolution testing and why it is important.
  When we take any medication, what happens in the human body is that the solid dosage form releases the active drug ingredient, and the body processes the drug out at a given rate. This is shown in clinical studies: a peak results when the maximum amount of drug is present in the blood, that level is sustained for some time, and the drug eventually declines in the bloodstream as it is excreted from the body.
  The laboratory methods utilized to monitor the quality of the product do not have the same mechanism, but they try to replicate the human body as much as possible. There are multiple techniques in use; however, all of them typically involve the release of the active drug ingredient into media, measured as a percentage of the total dose. For extended release formulas, formulation scientists utilize materials and processing methods to ensure that a specific amount of drug is released quickly enough to be effective, with the slow release required over time to maintain the drug level against the rate of excretion.
  Now that we know about dissolution testing, we can take a look at the methods we used previously to analyze this dissolution data.
  Similarity of profiles was assessed by graphing the average results of candidate batches against the target product. We usually use two methods. The first one is the F2 similarity criterion, which compiles the sum of squared differences of percent released in media across multiple time points. Scientists typically rely upon first principles and experience to create trial batches that will hopefully be similar to the target, so it is a trial-and-error method. With this criterion, a value of 50 or higher is desirable, indicating that the batches are, at most, plus or minus 10% different from the target profile at the same time points.
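  For reference, the F2 similarity factor mentioned here is the standard criterion from FDA dissolution guidance, computed from the reference (R_t) and test (T_t) percent dissolved at each of n common time points:

$$
f_2 = 50 \cdot \log_{10}\!\left\{ \left[ 1 + \frac{1}{n} \sum_{t=1}^{n} \left( R_t - T_t \right)^2 \right]^{-0.5} \times 100 \right\}
$$

  An f2 of 50 or more corresponds to an average difference of no more than about 10% across the compared time points, which is the plus-or-minus-10% interpretation given above.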
  The second approach we use is a more advanced one that came about through the adoption of quality by design in the pharmaceutical industry: a multivariate least squares model that comes from designed experiments. As you may know, least squares methods create an equation for how each input influences the dissolution output at designated time points. For extended release formulas, we typically look at the one hour, two hour, six hour, and twelve hour releases. The prediction profiler available in JMP provides the functionality to determine the input settings that give the best results across all the dissolution time points.
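  As a rough illustration of this per-time-point approach (a sketch with made-up batch data and hypothetical column names, not the authors' actual JMP analysis), the snippet below fits a separate least squares model for each designated time point; this is exactly the structure that treats each dissolution time point as an independent response:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical DOE results: one row per batch, one dissolution column per time point.
df = pd.DataFrame({
    "force":    [1600, 1600, 1600, 1600, 2000, 2000, 2000, 2000],
    "polymerA": [10, 10, 14, 14, 10, 10, 14, 14],
    "polymerB": [3, 5, 3, 5, 3, 5, 3, 5],
    "diss_60":  [30, 28, 24, 22, 27, 25, 21, 19],
    "diss_120": [52, 48, 43, 40, 48, 45, 39, 36],
    "diss_240": [80, 76, 70, 66, 76, 72, 65, 61],
    "diss_360": [96, 94, 90, 88, 94, 92, 88, 85],
})

# One separate least squares model per time point: the "independent outputs" assumption.
for col in ["diss_60", "diss_120", "diss_240", "diss_360"]:
    model = smf.ols(f"{col} ~ force * polymerA + force * polymerB", data=df).fit()
    print(col, model.params.round(3).to_dict())
```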
  So now the question is: if we have these capabilities, why are we looking at a new approach?
  The problem with the methods we have today is that F2 similarity trials and multivariate least squares both treat the time points of the dissolution profile as independent outputs, and we know that the release of the dose at one hour affects results at later time points as well. That's why we need a new approach. Functional DOE treats all the time points as dependent, and it is an extremely robust and easy-to-use technique for optimizing to a target dissolution profile rapidly with fewer resources.
  Let me quickly show you one example from a development project using the multivariate least squares regression method. As you can see in this data table, this was a DOE created for one project, and we have 12 batches here. Main compression force, polymer A, and polymer B are the three input factors, and we have the dissolution results at the different time points: 60, 120, 240, and 360 minutes. If we look at the least squares fit, you can see our main effects and interactions are all pretty significant, and if you scroll down to the end of the report, you'll see the prediction profiler. We have already set goals for what we want, so we can maximize the desirability, and it gives us the setting we need to get the desired profile, or a match to the target, if you want to say it that way. So this is what we get from the least squares fit. My former colleague, Rob Lievense, will show you functional DOE, which we believe is a much better way to optimize the formulation or process.
  Thanks AJ. You did a really good job of explaining all the work that we did changing to a quality by design culture and getting to the multivariate models.
  Now we're ready for the next step. I'm Rob Lievense, and I am a senior systems engineer at JMP, but I was in the pharmaceutical industry for over 10 years and wrote a book on QbD using JMP.
  So I want to show you this topic of functional data exploration, specifically functional data using a DOE.
  This works really well for dissolution data.
  For functional data analysis, we need to have the table in stacked form; that works best. So I have a minutes column, I have the amount dissolved for the six samples at each time point, I have the batch, and I have my process inputs.
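  For anyone reproducing this layout outside JMP, here is a minimal sketch (hypothetical column names, made-up values) of reshaping a wide table, with one dissolution column per time point, into the stacked form described here:

```python
import pandas as pd

# Hypothetical wide table: one row per batch, one column per dissolution time point.
wide = pd.DataFrame({
    "batch": ["B1", "B2"],
    "force": [1600, 2000],
    "polymerA": [10, 14],
    "polymerB": [3, 5],
    "diss_60": [28, 18],
    "diss_120": [49, 35],
    "diss_240": [78, 63],
    "diss_360": [95, 86],
})

# Stack to long form: one row per batch x time point, which is what functional analysis expects.
stacked = wide.melt(
    id_vars=["batch", "force", "polymerA", "polymerB"],
    var_name="minutes", value_name="dissolved",
)
stacked["minutes"] = stacked["minutes"].str.replace("diss_", "", regex=False).astype(int)
print(stacked.sort_values(["batch", "minutes"]))
```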
  Now what I want to do first is take a look at this as dissolution by minutes. One of the things I can see is, here's my goal, and here are all the things I'm trying with my experimentation. It really becomes obvious that these are dependent curves. Whatever happens at early time points has influence on later time points, so it really is silly to try to model this by just pulling in the time points and treating them as independent.
  We can utilize more of the data that comes from the apparatus in this way.
  This helps us develop the most robust function; the more pulls we have, the better it's going to be.
  So I'm going to run my functional data exploration here. We have the amount dissolved as the output. We have to put in what we have across X if the data aren't in row order; I have minutes, so I'm going to throw that in there. Batch is our experimental ID. And then these inputs that change as part of our DOE, we're going to put in as supplementary variables.
  What JMP does is it looks at the summary data. We can see that the average function is this kind of release over time, which makes total sense to me.
  I also can see that I have a lot of variability, kind of in that
  60 to 120 minute time frame, which is fairly common.
  And I have some ability to clean up my data, but I happen to know this data is pretty solid, so I'm not going to mess around with that. What I do need to do is tell JMP which is my target, and my target is my reference listed drug.
  Now I'm ready to run a model.
  There are various models available, but I've used b-splines with a lot of dissolution data, and it seems to work very, very well.
  What JMP is going to do is find the absolute best statistical fit. This doesn't make any sense for my data; I know that my concentration of drug in media grows over time. It never drops, so having these inflection points within the sections where this function is broken apart makes no sense. All these areas are knots, and this is how we break a complex function into pieces to get a better idea of how to model it. Well, I can fix this. I know that cubic and quadratic just make no sense here, and I happen to know that six knots is going to work quite well, so I'm going to toss that in there. I can put in as many as I want. Now JMP still gives me those nine knots. I need to apply some subject matter expertise here. I think I can do this in six; I can see I don't gain a whole lot of model fit by going beyond six.
  But I do want enough saturation in this lower area, because this is where dose dumping might occur. This is where I'm really interested in determining whether I'm getting an efficacious amount in the bloodstream. So I'm going to set that and update. Now, that one's not so great; I'm going to try again.
  Alright, so I get a very reasonable fit for this setup and I've got my points really where I want them. And I take a look at that, that makes a lot of sense.
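  As a rough stand-in for what is happening at this step (a sketch with made-up pull data; scipy's LSQUnivariateSpline substitutes for JMP's B-spline fit), fitting low-degree spline pieces with a handful of interior knots weighted toward the early time points looks like this:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Hypothetical pull schedule (minutes) and average % dissolved for one batch.
minutes = np.array([10, 20, 30, 45, 60, 90, 120, 180, 240, 300, 360, 420, 480, 600, 720])
dissolved = np.array([8, 14, 19, 26, 32, 41, 49, 61, 70, 77, 82, 86, 89, 93, 96], dtype=float)

# Six interior knots, concentrated at early times where the release rate changes fastest
# (mirroring the concern about saturation in the early, dose-dumping region).
knots = [30, 60, 120, 240, 360, 480]

# Low-degree (linear) B-spline pieces: a monotone release curve has no need for
# cubic or quadratic wiggles that could dip between time points.
spline = LSQUnivariateSpline(minutes, dissolved, t=knots, k=1)

grid = np.linspace(10, 720, 20)
print(np.round(spline(grid), 1))  # fitted % dissolved across the profile
```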
  Now, what JMP has done mathematically is determine that, relative to this average function, there's an early high rate of increase; that accounts for 83% of the explanation of the shape of this curve, of what changes the shape of this curve, if you will. I also see there's about a 15% influence of a dip, and I can tell you this is likely due to the polymers; I have fast and slow acting polymers, so that makes total sense. And then we have another one that's maybe a very deep dive. Now, we can play around with this if we want, but I'm just going to leave this with the three eigenfunctions.
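  Conceptually, the functional principal components described here are the leading eigenfunctions of the centered batch curves. Here is a minimal sketch of that idea on made-up curves (an SVD stand-in, not JMP's implementation):

```python
import numpy as np

# Hypothetical stacked curves: rows = batches, columns = % dissolved at common time points.
rng = np.random.default_rng(1)
t = np.array([10, 30, 60, 120, 240, 360, 480, 720], dtype=float)
curves = np.vstack([
    100 * (1 - np.exp(-t / tau)) + rng.normal(0, 1.5, t.size)
    for tau in [120, 150, 180, 210, 240, 270]          # six batches, varying release rate
])

mean_curve = curves.mean(axis=0)
centered = curves - mean_curve                          # deviations from the average function

# SVD of the centered curves: right singular vectors approximate the eigenfunctions (FPCs),
# squared singular values give each FPC's share of the shape variation.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)

print("share of variation per FPC:", np.round(explained[:3], 3))
print("first eigenfunction (sampled):", np.round(Vt[0], 3))
```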
  Now, this is my functional data analysis; the prediction profiler we get is expressed in terms of the functional principal components, which can be somewhat difficult to interpret. So we're going to move forward and launch the functional DOE analysis. Once we do that, we can see that our inputs are now the inputs to the process, so we can see how changes to these inputs affect our dissolution.
  But what we want to do is find the best settings. So we can go into the optimization and ask JMP to maximize. What JMP is going to do is find the absolute minimum of the integrated error from the target RLD; that's going to be the closest possible prediction to our target.
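  In our own notation (an interpretation of this step, not JMP documentation), the optimizer is choosing the input settings x that minimize the squared deviation of the predicted dissolution profile \hat{f}(t; x) from the target profile y_RLD(t), integrated over the dissolution window [0, T]:

$$
\min_{x} \; \int_{0}^{T} \left( \hat{f}(t;\, x) - y_{\mathrm{RLD}}(t) \right)^{2} \, dt
$$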
  We can see that we have about 1,800 compression force, about 12% Polymer A and about 4% Polymer B.
  And as we move these to different points, we can see
  what the difference is from target, so this one has about 1.54 for 60 minutes.
  And at 120, it drops to negative 1.2.
  At 240, we get to negative .7. So it gives you an idea of how far off we are, regardless of where we are on the curve.
  It's time for a head-to-head comparison.
  Since we have more points in our functional DOE, we're going to use this profiler to simulate, because I don't have the ability to make batches, but this is going to be as close an estimation as we can get.
  In the simulator, we can adjust for what we see happening in the press controllers, as far as the amount of variation in main compression force, and we can adjust for some variation in Polymer A and Polymer B.
  Once we've done that, we can run five runs at the optimum FDOE settings and five runs at the optimum least squares settings, and see how they compare.
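  Here is a toy sketch of this kind of comparison (an invented surrogate model and arbitrary "alternative" settings; only the roughly 1,800 / 12% / 4% FDOE-style settings come from the talk) that propagates input variation through to the integrated error from target:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(10, 720, 72)                       # uniform time grid (minutes)
target = 100 * (1 - np.exp(-t / 180))              # stand-in for the RLD target profile

def predicted_profile(force, poly_a, poly_b):
    """Toy stand-in for a fitted dissolution model (not the talk's actual FDOE model)."""
    tau = 60 + 0.03 * (force - 1600) + 6 * poly_a + 9 * poly_b
    return 100 * (1 - np.exp(-t / tau))

def mean_integrated_error(settings, sd=(25.0, 0.3, 0.15), n=5):
    """Average integrated squared error from target over n simulated confirmation runs."""
    dt = t[1] - t[0]
    errors = []
    for _ in range(n):
        force, poly_a, poly_b = rng.normal(settings, sd)   # input variation (press, blending)
        diff = predicted_profile(force, poly_a, poly_b) - target
        errors.append(np.sum(diff ** 2) * dt)
    return float(np.mean(errors))

print("FDOE-style settings :", round(mean_integrated_error((1800, 12, 4)), 1))
print("alternative settings:", round(mean_integrated_error((1600, 9, 3)), 1))
```

  A lower mean integrated error indicates confirmation runs whose simulated profiles sit closer to the target curve under the same input variation.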
  The simulations allowed us to compare what is likely to happen when we run some confirmation runs.
  And the thing that we see is the settings shown for the least squares model, which assume independence, are really not the settings we need. We have some bias. We don't get a curve that is as close to the target as we possibly could get.
  When we use the FDOE, our optimum runs are very, very close to the target curve, and you can see that our main compression force and our polymers are quite a bit different between those two results.
  Thank you, Rob, for explaining that. There are some considerations we need to keep in mind when using these methods. First of all, a measurement plan must be established with the analytical or laboratory team to ensure that there are enough early pulls of media to create a realistic early profile; when we say early profile, it's before 90 minutes or so. Secondly, the accuracy and precision of the apparatus must be established to know the lower limit, as very small amounts of drug in media may not be measured accurately. Third, the variation of the results within the time points must be known, as high variability (more than 10% RSD) may require other methods.
  Now that we have established this method, for the next steps we would like to establish acceptance criteria. We found that the model error for the functional DOE seems to be greater at the earlier time points. That may be due to the low percent dissolved and the rapid rate of increase, which creates high variability, and the amount of model error is critical for the establishment of acceptance criteria. A cumulative-contribution-of-FPCs criterion is likely too low for practical use, whereas the integrated error from target might provide evidence for acceptance. Lastly, creating a sum of squares for the difference from target at important time points could allow F2 similarity to be used for acceptance; however, more work is needed to explore this concept.
  Well, thank you so much for joining us, and we hope this approach will be useful in your work. We're going to be hanging out for live questions, and we're very interested in your feedback on this method, especially any ideas on how to establish acceptance criteria.