MSA and Optimization of a UHPLC Measurement System (2022-EU-45MP-1002)

An ultra-high-performance liquid chromatography (UHPLC) measurement system analysis (MSA) optimization and validation study in a quality lab is presented. Process settings of the analysis method are established in order to maximize measurement accuracy and the resolution of two organic compounds. There are seven control factors, and optimal DOE is used to design experiments that take into account the specified model and experimental criteria. The study demonstrates why one-factor-at-a-time (OFAT) experimentation is not appropriate and how to decide between a custom DOE and a definitive screening design (DSD) based on DOE diagnostics such as power, effect correlation, and variance profiles. Using stepwise regression, good predictive models are obtained that are supported by validation experiments. The Profiler desirability function is used to determine the optimal and robust UHPLC settings for measuring both compounds. The particular importance of the sensitivity indicator for improving robustness is shown.

Hello everyone.

This presentation will be about Measurement System Analysis,

and optimization of a UHPLC Measurement System.

Presenters are myself, Frank Deruyck

from HoGent University College of Applied Sciences,

and also Volker Kraft from the JMP Academic Program,

will take care of the demos

demonstrating the JMP tools for data analysis.

Okay. The problem statement and description.

Well, this presentation was inspired by a student internship

at a chemical company, and of course the material had to be kept confidential.

So I will just talk about "the chemical company",

and also the figures are a little bit modified, but no problem.

What is the problem statement?

Well, SPC revealed significant batch-to-batch variation in the raw material,

which caused problems in product quality.

So there was an issue with the supplier,

and it became necessary to analyze all supplied batches.

But one problem was that

the existing GC analysis procedure was too slow,

and a fast UHPLC analytical method

was in development, but not ready

for validation because of too much measurement variation.

And the goal of this study is to specify robust and optimal settings

of the UHPLC method so that validation of the new method will be possible.

Thanks, Frank.

Working with the JMP academic team for more than ten years now,

we have helped many university professors worldwide to get access to JMP licenses

for teaching, but also to teaching resources like the case study library.

At the link jmp.com/cases,

professors get free access to more than 50 cases,

each telling a story about a real world problem

and a step by step solution, including the data sets and exercises.

What we present today is available as a series of three case studies,

focusing on statistical process control,

measurement systems analysis, and design of experiments.

While Frank will talk about the problem

and the solution they developed for a pharma company in Belgium,

I will demo some of the analysis steps using JMP Pro.

Let me say thank you to Frank for sharing these cases with the academic community,

which really welcomes such real-world examples

coming from practitioners in the industry.

I also want to thank Murali, from our academic team in India,

who plays a key role in enhancing our case study library,

including the development of these cases together with Frank.

Okay, here in this plot you see very clearly the illustration of the problem.

So what is shown here is a plot of the measurements of the new UHPLC method,

but non-optimized, as a function of the measurements

of the standard GC method, which was very accurate and precise.

And you can clearly see that there are some problems.

So different operators made some measurements on different batches

and you can see that the prediction intervals are quite large.

You see a range sometimes over 100 milligrams per liter.

And you can also see that sometimes it's not clear whether a measurement

is within specification, as on the left graph,

and on the right graph you can see there are also issues with accuracy,

meaning that there's a serious problem.

First of all, we will explore the variation root causes

using measurement system analysis,

and then we will use DOE to optimize,

according to the statistical thinking concept

which will be illustrated in the next slides.

Here you see the statistical problem-solving process flow.

So for the cause of the problem,

the problem with our UHPLC measurement error,

we will tackle this by measurement system analysis,

and to explore the variation root causes we make use of DOE,

which we also use to optimize the process settings of the UHPLC system.

The method we will use for quantifying the variation sources is

the measurement system analysis, and I will show some theory.

So it's about quantifying the variance components of the total variance.

Total variance means the variance over all measurements,

by different operators on different products.

So we have two different components,

the product variation,

sigma squared product, and the measurement variation.

The measurement variation, very importantly, is also decomposed

into two important components: the repeatability, the variance

due to lack of precision in repeated

measurements, and also the variance between operators,

sigma squared capital R, which is the reproducibility.

And a very important criterion,

stating that the measurement system is only suitable for detecting variation

in the process, the process variation, is that the percent Gauge R&R, which is the

measurement error divided by the total error, should be less than 10 percent.

So if the fraction of measurement error is

below 10 percent of the total variation,

then we can use the system for process follow-up.

If that's not the case, if it is higher,

then we run the risk that we will control our process

on measurement variation, which is of course not a very healthy situation.

So it must be lower than 10 percent; that's the main criterion.
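
As a compact summary of this decomposition (a sketch in standard Gauge R&R notation; whether the 10 percent criterion is applied to variances or to standard deviations depends on convention):

```latex
\sigma^2_{\text{total}} = \sigma^2_{\text{product}} + \sigma^2_{\text{measurement}}, \qquad
\sigma^2_{\text{measurement}} = \sigma^2_{\text{repeatability}} + \sigma^2_{R}
\\[4pt]
\%\,\text{Gauge R\&R} = \sqrt{\sigma^2_{\text{measurement}} \,/\, \sigma^2_{\text{total}}} \times 100\% \;<\; 10\%
```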

Okay, let me go to next slide.

And for this we will use a Gauge R&R study. A Gauge R&R study

is essentially an experimental design:

we select three random operators,

John, Laura, and Sarah,

who each do repeated measurements, two times, on four different batches.

So with this design,

we are able to quantify the within-operator variation,

the repeatability; the reproducibility, the between-operator variation;

and also the product variation, the variation between batches.
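
For readers following along outside JMP, here is a minimal sketch of the classic ANOVA-based Gauge R&R calculation for such a crossed design; the file and column names are illustrative assumptions, not the study's actual data:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical file/column names: Batch (4 levels), Operator (3 levels),
# Y = measured concentration, 2 repeats per Batch x Operator cell.
df = pd.read_csv("gauge_rr.csv")
b, o, r = df["Batch"].nunique(), df["Operator"].nunique(), 2

# Two-way crossed ANOVA with interaction
fit = ols("Y ~ C(Batch) * C(Operator)", data=df).fit()
anova = sm.stats.anova_lm(fit, typ=2)
ms = anova["sum_sq"] / anova["df"]  # mean squares

# Classic expected-mean-squares estimators (truncate negatives at zero)
var_repeat = ms["Residual"]                                      # repeatability
var_inter  = max((ms["C(Batch):C(Operator)"] - ms["Residual"]) / r, 0)
var_oper   = max((ms["C(Operator)"] - ms["C(Batch):C(Operator)"]) / (b * r), 0)
var_batch  = max((ms["C(Batch)"] - ms["C(Batch):C(Operator)"]) / (o * r), 0)

var_meas  = var_repeat + var_oper + var_inter   # repeatability + reproducibility
var_total = var_meas + var_batch

pct_grr = (var_meas / var_total) ** 0.5 * 100   # % Gauge R&R on the SD scale
print(f"% Gauge R&R = {pct_grr:.1f}% (target: < 10%)")
```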

So, Volker, I'll leave the floor to you now.

Okay. Thank you, Frank.

And before I come to MSA,

I would like to briefly cover what's included in the first part of the series,

namely control charts and process capability.

So this is one of the data sets here

measuring the two compounds, our continuous responses,

compound one and compound two, using the good but slow GC method.

So data has been collected over eight days,

with two batches per day and for two different vendors, A and B.

So when the team started,

the first activity was to check and confirm normality of the data.

For this, they looked at normal quantile plots,

and they also fitted a normal distribution followed by a goodness-of-fit test.

And there was nothing critical from that analysis.

Exploring the distributions, the problem became clear.

So looking at data from different days, we can see a huge

batch-by-batch variation, even for batches coming from the same day.

And this means that process monitoring is not possible,

because

the variability of this GC method was high

and the method was too slow.

So it needed a lot of time

to monitor the batches.

And therefore one team activity was

to work together with the vendors and others to reduce that variation.

And another activity,

which is described in the other parts of the study, investigated a faster method,

as mentioned by Frank, using the new UHPLC measurement method.

So just looking a bit more into the old method, the GC method.

So the team looked at one-dimensional

control charts and also process capability for both compounds, of course,

and they also looked at multivariate and model driven

control charts.

And here we see that there were some extreme points in a multidimensional

analysis, and they could also see the contributions of the different

compounds or responses.

This is the conclusion of the first investigation using the old method.

So here the outcome, looking at the process performance, was that both processes,

so compound one and compound two, were incapable but stable

for vendor A,

and for vendor B,

we see that compound one was even unstable, following these colors here.

So all of this motivated the team to improve the measurement process.

And this brings us to part two of this series

and this is about analyzing the measurement system.

So this data here were collected for all the combinations, repeated twice,

between four batches and three operators,

using the new UHPLC measurement method.

So the goal here was to measure all

batches of raw material using this faster method and maybe to allow some

inline monitoring of the raw material in the future.

So to get started with the new data,

the team looked at a two-way ANOVA, and this output may fool you.

So on both compounds,

compound one and two, the batch effect is highly significant.

So that's good news.

Also, the operator effect and the interaction between batch and operator,

these effects are non-significant, but the RMSE is quite high.

So that means we are maybe looking at data

where the effects we are interested in are just hidden by noise.

So before you look at such an analysis,

the first question should be where is the variation in our data coming from?

And second, are we measuring the signal?

Are we really looking at the signal or are we just measuring noise?

And to get an idea about this,

a perfect visualization of the patterns of variation is a variability chart.

And for these two sources of variation, batch and operator,

here we see all the data points for all operators and all batches.

So we have two measurements per batch per operator.

And for these two, we see the mean.

We also see the group mean for one operator and we also see the overall mean.

That's the dotted line here for all our measurements.

And we can look at this for both compounds, of course.

So here, for instance, for compound two, we see that Laura has

quite high variation, at least compared to the other two operators.

So that's a visual analysis.
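
Outside JMP, a rough stand-in for such a variability chart can be sketched with seaborn and matplotlib (a sketch only, reusing the same hypothetical Batch/Operator/Y columns as above):

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("gauge_rr.csv")  # same hypothetical table as above

# Raw points grouped by operator and colored by batch
ops = list(pd.unique(df["Operator"]))
ax = sns.stripplot(data=df, x="Operator", y="Y", hue="Batch",
                   order=ops, jitter=0.1)

# Operator (group) means and the overall mean (dotted line)
for i, op in enumerate(ops):
    m = df.loc[df["Operator"] == op, "Y"].mean()
    ax.hlines(m, i - 0.4, i + 0.4, color="black")
ax.axhline(df["Y"].mean(), linestyle=":", color="gray")

ax.set_title("Variability chart (sketch): Y by Operator and Batch")
plt.show()
```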

The analysis method and procedure to use

to get a better insight into the measurement system's performance

is an MSA or measurement systems analysis.

And this was also done for the non-optimized UHPLC method.

So here, for instance,

for the first compound, we see this average chart

and this chart shows the data together with the control limits

or with the noise band.

And what we see here is not good news at all,

because our measurements, our data is within this noise band,

so it will be really hard to detect any signal with that noise level.

Another output is this parallelism plot.

So here we can check interactions between batches and operators and this would

indicate an interaction if some of these lines are not parallel.

And this is the EMP method.

So this stands for evaluating the measurement process

and you probably know this as Gauge R&R output.

And that's what also Frank mentioned.

So here we see the signal, that's the product variation,

but we also see the measurement variation

split into repeatability and reproducibility.

And here for the first compound,

we see that we seem to have an issue with repeatability.

So the same operator doing the same measurement again.

And for the second compound,

we see that there's also a slight issue with reproducibility as well.

So these are measurements between different operators.

So the conclusion here is that the measurement system

or the measurement process is unable to detect any quality shift caused

by a significant systematic variation between our batches.

And with that, I hand back to Frank.

Okay, thank you, Volker.

I think we can skip these next slides, because that's what we just discussed.

Okay, it's what you showed, Volker; that's also in the slides.

Okay, yes, here we can start.

And we start, first of all, with the design of experiments:

the optimization study, the process improvement study.

And first of all,

in order to better specify our experimental goals, we should

go through the root cause analysis for the high measurement error.

After a brainstorm with the lab team,

it came out that there were two main

root causes, one linked to the equipment and one linked to the method.

And the one linked to the equipment was

the main source of poor repeatability, because there was a very strong issue

with unstable column temperature and eluent flow rate.

The UHPLC is a chromatographic technique that uses a column

and an eluent that carries the compounds through the column

and makes separation possible.

As a matter of fact, there was drift between

different experiments and even within one experiment.

This of course results in poor repeatability.

And the first task of the lab team, of course,

was to stabilize the column temperature and eluent flow.

Because if we go to any experimental design,

we need to have fixed settings of the temperature

and eluent flow rate, which are of course two important factors.

For the second issue, the method:

besides the stability problem,

which of course was fixed, there was also an issue with

low resolution because of non-optimized analysis process settings.

The resolution was quite low and also unstable,

meaning that with small shifts in flow rate

and temperature, there were sometimes huge shifts

in resolution as well, indicating that there was not only

an optimization problem but also a problem with robustness.

So the goal was to specify not only optimal settings but also robust settings.

So the variation of the resolution should

be minimal as a function of some variation around the settings

of the analysis process.

Let's just now go to the DOE.

And the goal of the DOE, of course, is that

we specify the response variable Y, the compound concentration

in the standard samples, and we have to model this Y,

the compound concentration, as a function of the UHPLC control factors.

we can then of course go to optimization, and what is the optimization criterion?

We will use the quality P over T ratio criterion.

That means that,

that our,

fraction of the error

in the tolerance range should be lower than 10 percent.

The tolerance ranges of our compounds are specified:

the target of compound one

in standard sample one is 300 milligrams per liter,

plus or minus 200 milligrams per liter,

and the target of compound two in standard

sample two is 450 plus or minus 150 ppm as spec limits.

And of course, if we want to reach 10 percent

of these specification ranges, that gives us

our desirability function for optimization:

Y should match the target compound concentration within 10 percent of the tolerance.

Meaning that for standard sample one, the result

should be 300 plus or minus 20 ppm, 20 milligrams per liter.

And for standard sample two, the compound concentration should be

450 plus or minus 15 milligrams per liter.

That's the criterion for optimization.
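
Spelled out, the 10 percent rule turns each full tolerance width into an allowed window around the target:

```latex
\text{window} = \text{target} \pm \tfrac{1}{2}\,(0.10)\,(\text{USL} - \text{LSL})
\\[4pt]
\text{compound 1: } 300 \pm \tfrac{1}{2}(0.10)(400) = 300 \pm 20 \ \text{mg/L}, \qquad
\text{compound 2: } 450 \pm \tfrac{1}{2}(0.10)(300) = 450 \pm 15 \ \text{mg/L}
```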

So as for our model, put together with the lab experts:

the model terms are the main effects and all quadratic effects.

And the main effects are the temperature of the column,

with the range 25 to 35 degrees Celsius, eluent flow rate,

five to 15 milligrams per milliliter, and also a gradient.

What is a gradient?

Well, that means that there is an additive, acetonitrile, in the eluent, and the

concentration of this acetonitrile increases as a function of the volume

that has flowed through the column.

That means

from volume zero to one milliliter it's a range between five and 20 percent,

and once the volume is five to six milliliters,

we have a range of 35 to 70 percent of acetonitrile in the eluent.

Also an important factor is the UV wavelength.

The detection is by UV, and the wavelength should be controlled between 192 and 270 nanometers.

And brainstorming with the lab experts, who had already done quite some preliminary

experiments and had some experience with the UHPLC,

as a matter of fact, only two interaction effects were selected.

That's the temperature by eluent flow rate interaction, and also

the eluent flow rate interacting with the gradient factors specified above.
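
As a rough sketch of what this candidate model could look like in code, with purely hypothetical factor names (the gradient is represented here by two illustrative start/end factors):

```python
# Candidate terms for the DOE model: main effects, the two specified
# interactions, and all quadratics (all names are illustrative).
main = ["Temp", "Flow", "GradStart", "GradEnd", "Wavelength"]
interactions = ["Temp:Flow", "Flow:GradStart", "Flow:GradEnd"]
quadratics = [f"I({m}**2)" for m in main]
formula = "Compound1 ~ " + " + ".join(main + interactions + quadratics)
# This formula could then be fit, e.g., with statsmodels:
#   import statsmodels.formula.api as smf
#   model = smf.ols(formula, data=doe_df).fit()
```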

So this means that the design chosen

to meet the goals and to estimate the model parameters

was the custom design, and Volker will illustrate what that design was about.

Okay.

Thank you Frank.

So talking about the third part of this case study series, which is about designing

an experiment. What we learned so far is that we have to reduce the measurement

variation, which is caused by this non-optimized UHPLC method.

And this method can be described, as Frank pointed out, by these control settings

or process settings, like temperature and so on.

So we also want our responses to remain within their limits.

So this follows the 10 percent rule as given by Frank,

and these limits are also added to our data.

So to design such an experiment, the team looked at a definitive screening design,

this one here, and also added a custom design, both with 25 runs.

So comparing both designs, they used the Compare Designs platform,

and you can see several reasons which are in favor of the custom design.

So here, for instance, for main effects we see slightly better power for the DSD.

But for the other higher order effects we see

really a high benefit for the custom design.

Same here, looking at the fraction of the design

space we see that the custom design is doing better.

You could also look at the correlation maps,

and finally, the efficiency is also in favor of the custom design.

And for that reason the team used

a custom design for these studies.

So here we have the completed data

for the custom design, completed with both response measurements.

And for this we also have

the corresponding linear model.

So here for the first response,

compound one, and for compound two, both with their profilers of course.

And here's a combined profiler

with both responses at the initial, just the mid settings.

And by maximizing the desirability,

I get to the optimal settings, and we see that we are matching

both targets here, 300 and 450 respectively; we are matching them perfectly.

However, we also see quite large sensitivity indicators.

These are the purple triangles, and they are telling us that at the optimal point,

our response surface is quite steep in some dimensions

and this reduces the robustness of our process

in case of some random variation of our process settings.

And this can be further analyzed by adding

the simulator to this profiler and that is done here.

So here the simulator

specifies the random variation that was defined by our process experts.

And just keeping the mid settings, plus this random variation, simulating

10,000 response values, we see that all of our response values are out of spec.

These are all defects. Of course, we are just at the mid settings,

so nothing better to expect here.

Switching to our optimal settings and simulating again.

So now we see that the defect rate to be expected is above 12 percent.

So from here the robustness can be further improved,

either manually, using the profiler and the simulator

and the sensitivity indicators, or automatically by running

a simulation experiment, which is also built into those profilers.

So the team used a manual approach

and these are the robust settings.

They came up with these red settings here.

And if I simulate again, we see that the defect rate

now drops below one percent.

So this is a Monte Carlo simulation, it's all random so the defect rate changes

slightly with each new simulation, and you can also see how the histograms

of our simulated response data behave quite well.

So they are within our range, within our limits

which supports this 10 percent rule.

And all of that is at these

robust settings.
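
To mimic this defect-rate simulation outside JMP, here is a minimal Monte Carlo sketch; the settings, standard deviations, limits, and prediction function are all illustrative placeholders, not the study's actual values or fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000  # number of simulated runs, as in the demo

# Hypothetical operating point and expert-specified random variation (SDs);
# none of these numbers come from the actual study.
settings = {"Temp": 30.0, "Flow": 10.0, "GradStart": 12.0, "Wavelength": 230.0}
sds      = {"Temp":  0.5, "Flow":  0.3, "GradStart":  0.5, "Wavelength": 1.0}
X = {k: rng.normal(v, sds[k], N) for k, v in settings.items()}

def predict_compound1(X):
    # Placeholder for the fitted DOE model; substitute the real formula here.
    return 300 + 2.0 * (X["Temp"] - 30.0) + 5.0 * (X["Flow"] - 10.0)

y = predict_compound1(X)
lsl, usl = 280.0, 320.0  # the 10 percent window for compound one: 300 +/- 20
defect_rate = np.mean((y < lsl) | (y > usl))
print(f"Simulated defect rate: {defect_rate:.2%}")
```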

So going back to the other profiler.

So here I also have

some contour plots, using the contour profiler, and they can also be used

to better understand the best regions for running and configuring

the process, and these are typically the white regions, which show

the in-spec regions for a combination of two control factors.

And I hope you like this journey,

and with this I hand over back to Frank to discuss this outcome.

Okay, thank you, Volker.

So,

now we can go to the validation experiment.

Of course, once we have optimized the settings,

we can run an experiment to check whether the measurements of the new

UHPLC system and the GC analysis really are equivalent.

So for this, again, we set up a Gauge R&R study,

which is quite similar to the one

discussed before, but now we also make measurements with the GC in order

to compare the two measurement methods.

You see that one extra

factor is included in the Gauge R&R: the instrument factor.

Okay.

So here are the results, and you can really see

in the Gauge R&R results that we made quite an improvement:

the main variation now is product variation.

You see that the batch variation is no longer obscured by measurement noise.

The gray area is quite narrow compared to the ones

we had before, and that's very good news.

You see it's mainly product variation.

So the Gauge R&R ratio matches our target.

Meaning that the precision-to-tolerance ratio here is about eight percent.

The precision-to-tolerance ratio is six times the Gauge R&R sigma

divided by the tolerance of the compound, 400, which gives eight percent.

So the precision is okay.
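
In formula form (the sigma value shown is back-calculated from the quoted eight percent and the tolerance of 400, so treat it as implied rather than reported):

```latex
P/T \;=\; \frac{6\,\sigma_{\mathrm{GRR}}}{\mathrm{USL} - \mathrm{LSL}}
\;=\; \frac{6 \times 5.3}{400} \;\approx\; 0.08 \;=\; 8\%
```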

So this measurement system is suitable

to be used in quality control for compound one.

Nice. Okay.

You see on the parallelism plot just a little crossing,

for Sarah, indicating

maybe a little interaction between the batches and operators.

Okay, for compound two, we see the same thing, even better.

So only 5 percent Gauge R&R,

a precision-to-tolerance ratio of 5 percent;

same thing, a very narrow noise range and no major crossing of the lines.

Quite parallel, indicating no operator bias and no interactions.

Modeling the compound one analysis, we see that it's mainly

influenced by batch, with also a little batch-by-operator interaction effect.

That's compound one, and the same for compound two.

Okay, now we can see this very small

interaction effect, because we have reduced the measurement noise so much

now that very small effects become visible.

The first time, we could not see it, we could not detect it,

but that was because of very poor experimental power.

But now we have increased

the experimental power

seriously, just by reducing the experimental noise,

and of course now this interaction effect becomes visible.

But it's a small one; you can see it also

linked to Sarah, the green line of Sarah.

But there was also a little problem in the GC

analysis as well, not only for the UHPLC, but also for the GC analysis.

Okay, that's an issue still to be tackled.

But there it is.

Those two graphs illustrate fairly clearly

that both measurement systems are nearly equivalent, the UHPLC results

versus the GC results.

We see that there's a nearly perfect,

very good correlation, with all points on the mid line.

So the slope is nearly one.

It's not significantly different from one.

And also the intercept with the y axis

is not significantly different from zero.

So the method was ready for validation: the UHPLC is accurate, with

no significant difference from the standard analysis, so very nice results.
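
A minimal sketch of such an equivalence check, assuming paired measurements in columns GC and UHPLC (hypothetical names), tests whether the slope differs from one and the intercept from zero:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical paired results: one GC and one UHPLC value per sample
df = pd.read_csv("validation.csv")

fit = smf.ols("UHPLC ~ GC", data=df).fit()
print(fit.params)                    # estimated intercept and slope
print(fit.t_test("Intercept = 0"))   # intercept different from zero?
print(fit.t_test("GC = 1"))          # slope different from one?
```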

And we could say that tackling the problem with MSA and DOE

was very powerful, leading to a very nice solution to our problem,

which could of course now be implemented in production.

Thanks for your attention and if there are any questions,

please let us know.