
The New JMP 16 Limits of Detection in Design of Experiments and Data Analysis (2021-US-30MP-879)

Level: Beginner

 

Laura Higgins, JMP Global Technical Enablement Engineer, SAS JMP

 

JMP 16 has new features in its DOE platform that incorporate limits of detection into Column Properties, which are then automatically applied during modeling with JMP Pro’s Generalized Regression platform. I demonstrate these new features and discuss the impact of limits of detection on data analysis.

When we look for very small amounts of a substance, we are limited by the technology we are using. The level below which a substance could be present but undetectable is the lower limit of detection, and it has huge practical importance for detecting low levels of a substance in critical situations, such as identifying the start of an infection or correctly identifying small amounts of impurities. Ignoring the limits of detection in data analysis has serious impacts; basic statistics such as means and confidence intervals become biased and misleading, and modeling data with incorrect assumptions about the limits of detection leads to erroneous models. Limits of detection have implications in such industrial applications as pharmaceutical and chemical manufacturing, analytical chemistry, and diagnostic test design.

 

 

 

Auto-generated transcript...

 



Mike Anderson: Hi there.
  Let's talk about the new feature in JMP 16, limits of detection.
  What I'm going to talk about today is the background and definition of limits of detection.
  I want to go through some statistical considerations if you ignore the limits of detection.
  I also looked around and found what I think is some real-world evidence of a lack of robustness in designed products that incorporate limits of detection, specifically COVID-19 diagnostic tests.
  And then finally, I'm going to show you how to actually use the limits of detection in JMP: I'll show you how to enter them, how to set them up in design of experiments, and how to analyze your data using limits of detection.
  How do we know if something is there?
  So that may sound a little bit like an esoteric question, but when you're using an analytic method it's less of an esoteric question and more of a question about how good your system is.
  So, for example, if we are trying to find a very small amount of impurity in something, at some point the system with which we are detecting it is not going to be able to detect amounts below a certain level, and that level is called the limit of detection.
  There are a lot of different definitions of limits of detection.
  They vary by field and also by analytical method, so I've put a couple up here. The FDA says that for a molecular diagnostic assay, the limit of detection is the lowest concentration of a target that can be detected in 95% of repeated measurements.
  Now, if we look at what the chemistry world says, it defines a method's limit of detection as the smallest concentration or absolute amount of an analyte whose signal is significantly larger than the signal from a suitable blank. So these two definitions are inherently about different methods.
  And I looked through more of the literature: if you're doing something like high-performance liquid chromatography, they talk about signal-to-noise ratios when you look at the curves,
  and if you're making calibration curves, there are certain methodologies for that too. So there are a number of different definitions
  of the limit of detection, and I would encourage everyone to go and find the one that makes the most sense for their analytical method and also follow their industry-specific guidelines.
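  To make the FDA-style definition concrete, here is a minimal Python sketch with made-up serial-dilution counts (illustrative numbers, not data from the talk or from the FDA): it reports the lowest tested concentration detected in at least 95% of replicates.

```python
import numpy as np

# Hypothetical serial-dilution study: 20 replicate measurements
# at each candidate concentration (copies/mL), counting positive calls.
concentrations = np.array([50, 100, 200, 400, 800])
hits = np.array([6, 14, 18, 20, 20])   # positives out of 20 replicates
rate = hits / 20

# FDA-style rule: lowest concentration detected in >= 95% of replicates.
detectable = concentrations[rate >= 0.95]
print("LoD estimate:", detectable.min() if detectable.size else "not reached")
# -> LoD estimate: 400
```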
  In JMP,
  we define the limits of detection as a column property: limits beyond which
  the response can't be measured. What this actually does when we analyze the data is censor the data.
  So in the example I gave before, looking for a very small amount of impurity, that would be a lower limit, and those data would be left censored.
  But it could also be an upper limit: for example, if you're applying pressure to an object, your pressure gauge may stop at some upper limit.
  You could actually be applying more pressure, but your gauge simply doesn't go any higher. That would be an upper limit, and those data would be right censored, but they're all taken care of in exactly the same way in JMP.
  So, I want to go through an example of what would happen if we just kind of ignore the limits of detection when we analyze our data.
  So in this example,
  in the histogram below, you can see I have a true population; those are all my green bars. But let's suppose I have a limit of detection
  at 10, and I can't get an accurate measurement below that. So it's not zero;
  I just don't know how much of whatever I'm trying to measure is there, say how many particles there are. The correct statistical method for this, again, is censoring, and there's a very good talk by Michael Crotty from JMP at Discovery 2017;
  if you'd like more statistical details, you should go check out his talk.
  Alright, let's go through this example. Let's say we have this limit of detection of 10,
  and what we're going to do is treat that part of the population as
  something we don't really know. So
  here I have a histogram; again, on the right is my true population.
  And I have a couple of different scenarios here: let's say my limit of detection is 10,
  then another time it's 11, and then 12. What I'm doing here is taking the data below the limit and pushing it up to the limit.
  Let me show you what this looks like. So this is not something that happens automatically with the limits of detection, this is an example that I've constructed.
  So, if you look, for example, at row 7, my true value is 9.22.
  But if I have a limit of detection of 10,
  then I wouldn't actually know that value of 9.22, and the best I could do is write it down as 10.
  Alternatively, I could record it as missing or zero, but I think 10 might be a little more realistic.
  Now, if my limit of detection were 11, I'd have to move that number up to 11, and likewise, if it were 12, I'd move that data point and turn it into a 12.
  What we see, especially if you look over here at 12, is that I've really bunched my data up.
  The effect this has on the mean: in my true population, my mean is about 10.5.
  With a limit of 10 it goes to 10.7, with 11 it gets higher still, to 11.2, and with 12 the mean is essentially at the limit of detection, because I've taken all those values and turned them into that lower limit.
  So I'm getting farther and farther away from my true mean of 10.5.
  But look what happens to our measures of variation: my standard deviation is 1 in my true population,
  it goes to 0.8 if the limit of detection is 10, then to 0.4 for 11, and then all the way down to 0.2 for 12.
  So my measures of variation are getting smaller, and I'm actually getting more confident about a measurement that is farther and farther from what the true population actually is.
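  You can reproduce this substitution effect in a few lines of Python. This is a simulation sketched to mirror the talk's population (mean 10.5, SD 1), not the actual demo table:

```python
import numpy as np

rng = np.random.default_rng(1)
true_pop = rng.normal(10.5, 1.0, size=10_000)   # true population

print(f"true population: mean={true_pop.mean():.2f}, sd={true_pop.std():.2f}")
for lod in (10, 11, 12):
    # Replace every value below the limit with the limit itself.
    y = np.where(true_pop < lod, lod, true_pop)
    print(f"LoD={lod}: mean={y.mean():.2f}, sd={y.std():.2f}")
```

  The mean climbs toward the limit while the standard deviation shrinks, exactly the pattern in the graphs described next.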
  So I've summarized that in the following graphs. In the middle here, you can see my true population,
  and I'm connecting the means from the other scenarios where I have imposed this false limit of detection; you can see the means are dramatically getting higher and further from the truth.
  The bars here are the standard deviations, and you can see those are getting smaller; the orange shading is the 95% confidence interval of my mean.
  Over here on the right, in the upper graph, I've graphed both the mean and the median, and you can see they behave in a similar way,
  which is surprising, because you would think the median would be a little more robust, but in this situation it's really not helping you out.
  Below that I have my measures of variation, and you can see they get smaller as we impose those higher and higher limits.
  So it looks like we're doing really well, right? My confidence interval for my mean is getting quite
  small. But the truth is I'm getting further and further away from the true population value, and I'm getting more confident the further away I get.
  So this is a real concern: you wouldn't necessarily be able to detect what's really going on, and you will be falsely confident about it.
  So when we extend limits of detection into designed experiments,
  what you're going to see in a demo in a little bit is that we're not going to find our critical factors, and we also won't discover the true relationships among our factors.
  Now, before I show you this in JMP: I really looked through the literature and tried to find some real-world evidence of a lack of robustness
  related to limits of detection in a designed product. What I found is a nice internal comparison among COVID-19 diagnostic tests.
  So, in early 2020,
  there were more than 20 tests that the FDA gave emergency use authorization to.
  They're all real-time PCR tests, and they're testing for viral load, that is, how many copies of the viral genome you have.
  Now, initially, because there was no standard - because it's something brand new that we had never seen before - companies could develop and use their own internal standard.
  Now later in the year, the FDA created their own standard and gave that out and reassessed the diagnostic tests. So, I can compare these two different time points of the same test using different standards.
  So what I decided to do is rank order the tests. But how do you know what a good test and a bad test is? How am I going to rank order these?
  Well, we can look at the analytic relationship between viral load, which again is the number of genome copies the virus is making,
  and the clinical sensitivity of the diagnostic test. Here on the x axis, we have how many genome copies there are per milliliter, so the viral load.
  A test that works at smaller viral loads detects mild cases, and far over to the right, we're really only going to be detecting super heavily infected individuals, so super spreaders.
  The y axis here is the fraction of cases detected. And again, this is real evidence: what we can see by this black line is that clinical sensitivity is related to how small a
  viral load we can detect. Detecting a smaller viral load is good, and that's what I used for the rank order.
  So I have two different time points, April 2 and September 15, and for each time point I rank ordered the tests by their limit of detection based on each standard.
  If the diagnostic tests were made in a robust way, we would expect more or less the same rank order and similar limits of detection.
  If there was a lack of robustness in how these tests were designed, then we would expect to see a change in the rank order and varying limits of detection.
  So what did I find?
  So here I've graphed the data that I found, in four separate
  panels.
  Within each panel I'm connecting the same diagnostic test at the two different dates. So, for example, with the purple line on the left, I'm connecting
  the rank on April 2 to the rank on September 15, and this particular diagnostic test was originally ranked 16th and then went all the way to number 1.
  I also graphed the final limits of detection, so this test would have been in a different, much worse panel if I were graphing its initial value from April 2; this is the final value.
  So this test got much, much better, and what we see is that there are a couple of other tests that dramatically improved
  their limit of detection and their ability to detect smaller and smaller viral loads.
  But we also see tests that got much, much worse, and again they're changing by several orders of magnitude. For example, this bottom one here in red was the third best test on April 2, but then it dropped all the way down to number 14.
  So I think what this shows is, in general, a lack of robustness in how these tests were initially made.
  I don't know how these tests were made.
  I don't know if they used JMP, a designed experiment, and limits of detection, but I think applying these kinds of techniques could certainly make anything that uses limits of detection much more robust.
  So let's go and look at JMP.
  So how do I set up limits of detection?
  Well, you can do it directly in Custom Design: there's now a box where you can simply type in the limits of detection.
  When you do this in Custom Design, the output table gets a column property that contains the limits of detection.
  When you analyze this in Generalized Regression, it
  will use those limits of detection and treat values outside the detection limits as censored.
  Let me show you how to do this in JMP.
  So here is what a data table set up in design of experiments looks like. How I got here is, I went to DOE > Custom Design.
  And you can see right here is where I would put in my detection limits for each of my responses.
  It's boring to watch me type, so let me show you how I already have this set up for this particular data table.
  Here I have several responses,
  and I have a variety of different kinds of detection limits: I have both an upper and a lower detection limit for a couple of my responses, and you can have a one-sided limit in either direction.
  Here, I just have a lower detection limit, and here I just have an upper detection limit.
  And you can certainly leave it blank if you don't have a detection limit.
  And if we look
  at the data table, you can actually see the detection limits.
  I can select all of these columns
  and scroll through them to show you what the detection limits look like; it's just another item among the column properties. So here's an upper and a lower;
  again, an upper and a lower;
  on this one, just a lower limit; and on the next one, just an upper limit.
  Now, viscosity doesn't have one at all. If I wanted to go back and add it manually, or if you're using limits of detection for analysis but did not set them up in a designed experiment,
  you can simply go to the column properties; it's here in the list of column properties that apply to a designed experiment.
  Just add Detection Limits, and then we can type in whatever value we want, so we could say 1300,
  leaving the other side blank
  for a one-sided limit.
  That adds the property, and then it will be there.
  So that's how you get limits of detection into
  data tables and column properties.
  Alright, so now let's talk about how to analyze data that have limits of detection.
  For this, I want to analyze an experiment that was set up with Custom Design; the goal of this experiment was to optimize the determination of a pesticide from water samples.
  It's a three-factor experiment with 32 runs and an I-optimal design.
  Nine of those observations were below the limit of detection of 1%.
  So I'll show you how to analyze the data in JMP.
  But then I also want to make an additional comparison: what if we didn't use, or didn't have, limits of detection?
  Well, my limit of detection is 1%, and again, we're uncertain about what those actual values are. We could take zero for one case, saying
  there's been no pesticide in the water sample at all, or we can say it's right at that limit of detection and use the value of 1. Those are the two alternative scenarios. And what you're going to see is that if we don't use the limits of detection,
  we're not going to find all of our factors, and the results are actually going to be wildly misleading and inaccurate.
  So let me show you how to do this in JMP.
  So this is what my
  design looks like: I have Metacrate as a response, and here are my upper and lower limits of detection.
  Here are my three factors.
  And I just did a response surface
  design, so it's each
  factor plus the quadratics and the two-way interactions, and I ended up with a 32-run design.
  Here's what the data look like.
  So again, these are the results of my designed experiment. In the cases where I have
  zero, that's just what my machine says; my machine said it didn't detect anything, but the real situation is that the value is censored. And my detection limits are in here from setting up my designed experiment.
  And we're going to analyze this with Generalized Regression;
  it handles limits of detection, and regular least squares does not. So while I could just run this script, I want to show you how I actually set this up.
  So I have Metacrate as my outcome variable, and we're going to use Generalized Regression.
  Here, the normal distribution fits the data okay, but it results in negative prediction values, and there's no such thing as a negative amount of pesticide.
  So to keep the predictions positive, I'm going to change that to lognormal.
  And notice I don't have to put in anything for censoring; I have the option here, but JMP will automatically take care of it.
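  Conceptually, what a censoring-aware fit does can be sketched in a few lines of Python. This is not JMP's Generalized Regression, just the core likelihood idea with hypothetical readings: observed values contribute the lognormal density, while below-limit values contribute only the probability of falling below the limit (the CDF).

```python
import numpy as np
from scipy import optimize, stats

LOD = 1.0                                             # lower detection limit
observed = np.array([2.1, 3.4, 1.7, 5.0, 2.8, 1.2])   # hypothetical readings
n_below = 3                                           # runs reported only as "< LOD"

def neg_log_lik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                         # keeps sigma positive
    # Uncensored points contribute the lognormal density...
    ll = stats.lognorm.logpdf(observed, s=sigma, scale=np.exp(mu)).sum()
    # ...censored points contribute P(Y < LOD), the CDF at the limit.
    ll += n_below * stats.lognorm.logcdf(LOD, s=sigma, scale=np.exp(mu))
    return -ll

fit = optimize.minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
print("mu =", round(fit.x[0], 3), "sigma =", round(np.exp(fit.x[1]), 3))
```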
  When we do a designed experiment analysis in Generalized Regression, forward selection is a good choice; it's very similar to stepwise regression, which you may be familiar with
  from the Fit Model platform.
  In my model summary, JMP reminds me that the response has detection limits.
  And my RSquare is 0.91, so that's a really good fit.
  Let's look at the details of what's going on here.
  If you aren't familiar with a solution path in Generalized Regression: what you're looking at is how the parameters entered the model (again, it's forward selection),
  scaled by their magnitude of effect. So the first one that entered is my dichloromethane, and it has a positive effect on the outcome.
  The quadratic came in next, and it has a negative
  effect, so literally a negative estimate.
  This is fun to see and interact with, and overall, the best model, based on the minimum AIC score, has five steps to it.
  Now I like to visualize the equation with the prediction profiler.
  And I'm just going to simplify this for a minute and make this a little larger.
  What the prediction profiler shows us: this is our outcome variable, Metacrate, on the y axis, and each panel is one of the factors in our experiment.
  What we're looking at is the equation, or the relationship,
  graphically. We can see interactions, because the shapes for the other factors change when I change dichloromethane, and you can certainly see there's a lot of curvature in these factors.
  I also have my actual-by-predicted
  plot, and that looks pretty good.
  What I want to do is maximize the amount of Metacrate I can detect; that's why I did my experiment. So I will ask JMP to find the maximum.
  What it found is that the maximum amount of Metacrate I could detect is 38.6,
  at these factor settings: my water sample volume is just under 6, with this amount of methanol and this amount of dichloromethane.
  So this is the outcome of my experiment, and I would want to go back, actually test this, and make sure it works.
  Now, what if I ignored the limits of detection and tried something else?
  Here, I have the exact same data,
  with Metacrate at zero, but I've added a couple of columns. In one column I said, well, maybe it's actually at the lower limit, at 1; so instead of zero I just changed those values to 1.
  Alternatively, what if there wasn't anything in those samples to be found? Then the value would be zero, but in this case I can't use zero because the distribution is lognormal, so I just picked a very small value:
  1 × 10^-12, that is, 0.000000000001.
  So let's analyze the data under these two conditions; again, we're not going to use the limits of detection.
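  Before the JMP demo, here's a hedged Python sketch of why both substitutions mislead: simulate lognormal data (the parameters here are made up, not the pesticide experiment), censor it at 1, substitute either value, and fit an ordinary, censoring-blind lognormal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true = stats.lognorm.rvs(s=1.0, scale=2.5, size=200, random_state=rng)
LOD = 1.0

for label, sub in [("at the limit (1)", 1.0), ("near zero (1e-12)", 1e-12)]:
    y = np.where(true < LOD, sub, true)          # substitution instead of censoring
    s, _, scale = stats.lognorm.fit(y, floc=0)   # naive maximum-likelihood fit
    print(f"substituting {label}: sigma={s:.2f}, median={scale:.2f} "
          f"(true: sigma=1.00, median=2.50)")
```

  The near-zero substitution wildly inflates the fitted spread, which is exactly the kind of nonsensical model you're about to see.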
  So instead of Metacrate, I'm going to put in the first column, where we reset all those values to right at the lower limit of detection.
  We're going to do forward selection.
  You can see there are no limits of detection here like there were before, and my RSquare is just under 0.7.
  You can see from both my solution path and my parameter estimates table
  that I only have three forward steps in my model, and I didn't pick much up; I've got a lot of blank spaces here. And if we look at the picture of my
  equation down here in the profiler, you can see I'm actually missing one of my factors. Comparing it to the earlier model, I totally missed methanol.
  And if I maximize
  to find the largest amount of Metacrate, I can only find about 16.
  And my actual-by-predicted plot does not look very good; it does not look very happy.
  So that's one scenario: we totally missed one of our factors; we simply didn't pick it up.
  Now let's look at the other scenario: what if, instead of 1, it's almost zero?
  Not quite zero, but very, very close.
  Forward selection.
  Again, we're not using limits of detection here, but look at this: my RSquare is 0.997.
  So it thinks it found a really good fit.
  But there's still something strange going on here. I only have three steps in my model, and I haven't really found much at all: I found
  my dichloromethane, and I just barely picked up my sample volume.
  If we look at the prediction profiler,
  again I'm missing methanol, so I don't have one of my factors. My actual-by-predicted plot looks so-so, but what on earth is going on here? Where is the top of my curve?
  I'm going to turn on this adaptive y axis, to make sure I can see all of it.
  It has prediction values that are up in the hundreds. I could run maximize desirability, but you can see it's over 470 even if I just manually try to pick something. Now remember,
  my upper limit of detection was 99, so these values don't even make sense.
  I missed a factor, and the predictions don't make sense. Part of why I really want to show this example is that I have seen this at customer sites,
  and at the time I could not explain what was going on. So I know this actually happens when people are not using the limits of detection:
  you get some very strange outcomes, because you've made a decision about how to treat the data in your analysis; you've treated it as either a 1 or a zero.
  In this case, those are the two alternative scenarios, but they are far, far from the reality of the real ability to detect this pesticide in the water, and from the true outcome of my designed experiment.
  So, the key takeaways that I really want everyone to leave with: number one, limits of detection are very important.
  You should not ignore them, and you should not replace censored values with the high or the low value;
  you should incorporate that uncertainty into the actual analysis. And JMP makes this very easy to do: you can do it in the Design of Experiments platform, and you can do it with a column property.
  Then, once the limits are there, Generalized Regression can do an analysis that incorporates them, and it's going to give you the very best model: you'll find your factors and the real relationships between them.
  And finally, we at JMP really do encourage people to go back and re-analyze data where you may have seen some of these strange outcomes.
  You should know when you have a limit of detection, based on your analytic method,
  and we really do think people should go back and re-analyze that data. Using limits of detection in your design of experiments is going to get you where you're trying to go much faster and much more accurately.
  Thank you very much for listening to my talk, and if you have any questions or would like to follow up, please feel free to reach out.