Cutting Costs, Elevating Quality: DOE's Impact on Immunohistochemistry Clinical Protocols
Cerba Research is a globally renowned company that specializes in delivering top-notch analytical and diagnostic solutions tailored for clinical trials worldwide. At Cerba Research Montpellier, our dedicated team customizes immunohistochemistry protocols to detect specific target expressions within patients' tissue sections. To address the escalating demand for protocol development and enhance process profitability, we recognize the vital need to streamline development timelines and reduce costs.
Given the diversity of custom protocols to be developed, the conventional OFAT (one factor at a time) approach is no longer sufficient. We have therefore undertaken an in-depth evaluation, comparing various design of experiments (DOE) methodologies, including custom design and space-filling design, using JMP. These DOE approaches are evaluated against previously developed OFAT protocols. We present data illustrating the comparative advantages of the OFAT and DOE approaches in terms of cost-effectiveness and quality.
Hello. I'm Marie Gérus-Durand, and I'm working at Cerba Research as a validation engineer. Today, I will show you how we can cut costs and elevate quality by using design of experiments when setting up immunohistochemistry clinical protocols. As an introduction, Cerba Research is a worldwide company with activities on all continents. I have highlighted in yellow the department I'm working for.
It's the pathology and IHC department. The main research department for IHC is based in Montpellier, where I am working. What is immunohistochemistry? Immunohistochemistry is the study of proteins or various other targets in tissue. When we have biopsies, we cut thin sections of the tissue, and we use antibody-based technology to detect different targets of interest on the tissue, which we can see under a microscope or, with a scanner, on digitized images.
Here, for example, you have a skin tissue where you see a nice target stained in red. More than a quarter of our activity is in protocol setup for our clients at Cerba Research Montpellier. We have more than 50 clients, each one with their favorite targets. It's crucial for us to find the fastest and best development strategy in order to stay competitive in the whole area of protocol setup for clinical trials.
Here is an example of another staining, and this is the staining I will use in the first example I will show you. It's the detection of a target, in brown here, by IHC. This protocol detects only one target; we call it a simplex, and it's a lot of what we are doing.
Our current approach is one factor at a time: we first evaluate the antigen accessibility of the tissue for the target to be recognized, then we do a titration of the antibody to find the best condition, the one giving the best signal-to-background ratio, and we do further optimization if required. Each arrow here is a step in the development protocol, each step is at least one autostainer cycle, and each cycle lasts at least four days from the request to the test results.
To improve our profitability and the number of protocols developed, we need to reduce the time and the number of tests needed to arrive at the optimized protocol. Here I schematize our strategy: we test two antigen accessibility conditions, with no antibody or a defined antibody concentration, and so we have these four points. Then we choose the best one, here condition two, to do a titration of the antibody with more points. But what if, in fact, the optimal point was somewhere with condition one, outside of the range tested?
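To make that gap concrete, here is a minimal Python sketch of the run lists the two strategies produce. The condition names and concentration values are hypothetical stand-ins, not our lab's actual series; the point is that OFAT only ever titrates under the retrieval condition that won the first screen, while a crossed design keeps both conditions in play at every concentration.

```python
# Sketch of the OFAT strategy described above (all values hypothetical).
# Step 1: screen two antigen-retrieval conditions at zero and one fixed
# antibody concentration; step 2: titrate only under the winning condition.

retrieval = ["condition 1", "condition 2"]      # e.g. two retrieval buffers
screen_conc = [0.0, 5.0]                        # µg/mL, assumed screening levels
titration = [1.0, 2.5, 5.0, 7.5, 10.0]          # assumed titration series

ofat_step1 = [(r, c) for r in retrieval for c in screen_conc]
best = "condition 2"                            # picked from the step-1 images
ofat_step2 = [(best, c) for c in titration]
ofat_runs = ofat_step1 + ofat_step2

# A crossed design covers both retrieval conditions at every concentration,
# so an optimum hiding under "condition 1" cannot be missed.
crossed_runs = [(r, c) for r in retrieval for c in titration]

print(len(ofat_runs), "OFAT runs, only one retrieval condition titrated")
print(len(crossed_runs), "crossed runs covering the whole factor space")
```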
When I look at this, it reminds me a lot of the design of experiments webinars that I saw with JMP. This is an example slide from one of these webinars, where they compare the OFAT approach, one factor at a time, which is the one we are using now, to DOE, and you see that you have more coverage of your experimental area, which can be a great improvement in our setup. So I decided to give it a try.
First, I start with the simplest thing, which is simplex protocol development, as I showed you, detecting only one target on the tissue. We first have to define our constraints and parameters. The autostainer I'm using for this experiment has independent positions, which means that one test from the design of experiments is one slide in the autostainer, and we have to define the responses and the factors.
The responses we want to analyze are the signal intensity, which we want to maximize, and the background, which we want to minimize. The factors we can play with at this stage are the antigen retrieval, which is a categorical factor, and the primary antibody concentration, which is a continuous variable. Let's go with the design. I chose to compare different designs.
I will show you the Custom Design and the Space Filling Design. We will set up this design together. I just go to the DOE platform, Custom Design, and we have to enter the responses and the factors. Our responses go here. The first one is the signal intensity that we want to maximize.
I will use an arbitrary evaluation of it, from zero, which is no signal, to three, which is a strong signal, and I will add another response here. It is the background, which we want to minimize, so I choose Minimize this time, and I put the background here. In the same way, zero is no background and three is a very high background, and then we will add the factors.
I'll just show you, because it was a bit hidden here. As factors, we have the categorical factor, which is two-level: the antigen retrieval. It's quite easy to change, and we have two pHs, usually pH 6 and pH 9, which are called differently depending on the autostainer, but this doesn't really matter.
We have the continuous factor, which is the antibody concentration. Usually we test 10 µg/mL as a maximum, so I will make it vary from zero to 10. Once we have set this, I have no covariates; as I told you, I chose the simplest case. We continue with the model: I want to see all the possible interactions, so I do RSM. I don't do replicates, I don't do center points; I really go for the simplest, and you see what it proposes by default.
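As a rough illustration of what that RSM model contains, here is a minimal numpy sketch of the model matrix for these two factors. The ±1 coding and the 0-to-10 scaling are my assumptions for the sketch, not JMP's internals; the point is that the model has five parameters, which is why a design of around 11 runs is comfortable.

```python
import numpy as np

# RSM model for one two-level categorical factor (antigen retrieval, coded
# -1/+1) and one continuous factor (antibody concentration, 0-10 µg/mL):
# intercept, AR, conc, AR*conc, conc^2  ->  five parameters in total.
def model_matrix(ar, conc):
    a = np.asarray(ar, float)
    c = (np.asarray(conc, float) - 5.0) / 5.0   # rescale 0-10 to -1..+1
    return np.column_stack([np.ones_like(c), a, c, a * c, c ** 2])

# A tiny candidate set: both retrieval conditions at 0, 5 and 10 µg/mL.
ar   = [-1, -1, -1, +1, +1, +1]
conc = [ 0,  5, 10,  0,  5, 10]
X = model_matrix(ar, conc)
print(X.shape)  # (6, 5): six runs can already estimate the five parameters
```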
It says you should do 11 runs; 11 runs fits with what we usually do, so it's fine. To take advantage of the Design Explorer that is in JMP 17, I just click here and say, okay, let's explore different designs. You see that when you click, you have the factors again, the model here, and you have different options. On the left, you can explore design by design; on the right, you can do a combination. For RSM, usually we do I-optimality, and now, let's say, I want to try different numbers of runs.
Let's say a minimum of five runs. We would start with this, and we go up to, we said 11, but what if we do 15, for example, with a step of, let's say, one. Center points, I don't really care, but we can try and see what it does, and replicates, let's see if it's better if we do replicates.
I click Generate All Designs, and it builds all the designs with the different parameters. I chose I-optimality, so it's I-optimality everywhere. I chose run numbers from five to 15, and then you have replicates.
It didn't put a replicate for five, I don't know why, but it's okay. You see, for seven for example, I have two replicates, one, or zero, and the same for all of them. For center points as well, I have one center point or two center points when possible. At the bottom, you have Make Table, so you can make it into a nice table.
These are just custom designs, for all of them. If you choose a design in this table and click Custom Design, it opens that design's dialog, which is nice. Here you can do some graphs and say, okay, let's see the efficiency depending on the number of runs I will do.
You see that there is a great correlation between the two of them. We can play and add other variables to look at with the Column Switcher. Now I want to see runs, and I want to see center points and replicates as well. Sorry, I forgot to select the runs, but anyway, here we have the runs.
Center points don't seem to have a big impact, and replicates neither, so it looks like it's really the run number we should play with. If I look at the run number, I come back to this: I just lock the replicates and the center points, and now we look just at the run numbers.
I didn't remove everything; you should remove everything before, otherwise it just adds all of them. So I removed everything, just selecting what I want. Sorry, it takes that, so let's say I will choose what was selected, so 12 runs.
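The Design Explorer does this scan with proper optimal-design algorithms; purely as an illustration of the idea, here is a crude random-search analogue in Python. It scores candidate designs of each run size with the simpler D-efficiency criterion (the talk uses I-optimality, which JMP computes for you), and the shape of the result, efficiency driven mostly by run number, is the same conclusion.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_matrix(ar, conc):
    c = (np.asarray(conc, float) - 5.0) / 5.0
    a = np.asarray(ar, float)
    return np.column_stack([np.ones_like(c), a, c, a * c, c ** 2])

def d_efficiency(X):
    # D-efficiency: 100 * det(X'X)^(1/p) / n, for coded -1..+1 factors.
    n, p = X.shape
    return 100.0 * np.linalg.det(X.T @ X) ** (1.0 / p) / n

# Candidate points: both retrieval levels crossed with a concentration grid.
ar_levels = np.array([-1.0, 1.0])
conc_grid = np.linspace(0.0, 10.0, 11)
cands = np.array([(a, c) for a in ar_levels for c in conc_grid])

# Crude "design explorer": for each run size, keep the best of many random
# draws from the candidate set (real software uses coordinate exchange).
for n_runs in range(5, 16):
    best = -np.inf
    for _ in range(2000):
        pick = cands[rng.integers(len(cands), size=n_runs)]
        X = model_matrix(pick[:, 0], pick[:, 1])
        if np.linalg.matrix_rank(X) == X.shape[1]:
            best = max(best, d_efficiency(X))
    print(f"{n_runs} runs: best D-efficiency ≈ {best:.1f}")
```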
I will go back just here and say, okay, the default is 11? In fact, I want to be sure to have my two negative controls, which are no antibody at all and the isotype control, so I will say 10.
I will make a design with 10 runs, and then I will add my two controls manually. Here you see it's a small design; it takes only 10 seconds, and here is the table. In fact, each time we run it, it will give different numbers here, so I will use the one I did before: just by running this script, I will have exactly the same data as the one I'm showing you. Here is the table, and you see it is different.
I have only 0, 5, or 10 for the concentration, but let's keep this table. In the table, you see you have the model already, the evaluation of the design, and you can go back to the DOE dialog box. The evaluation I will not do now, because I want to compare the two designs, so I will do it at that time. I just plotted it, and you see the design I have has duplicated points.
Even if I did not ask for replicates, it gave me replicates, and it covers this area of the experimental space. Then I try a space-filling design. It's the same: go to DOE, but this time it's Special Purpose, Space Filling, here. And for what I want to show you, I need to go back to my Custom Design: I want to compare the designs, but the responses and factors should be the same, so I can just save my responses and my factors. I just do Save Responses and, in the same way, Save Factors.
Then, if I go back to the Space Filling Design, I can load the responses. Sorry, you have to have the right window selected so you can find it. The same for the factors: I load the factors. No constraints. Once again, the default number of runs is 20, but I want to compare the two designs, so I will do 10, the same.
I have no choice; it's just Fast Flexible Filling. It generates this design, and you see that this time the concentration is not restricted to 0, 5, or 10; it's a whole range of concentrations, and you can make the table in the same way. I can close all of this. As for the Custom Design, I rerun the one I did before, so I have exactly the same numbers, and I wanted to compare the designs. How do you compare the designs? Oh, first, sorry, I forgot.
You see that the variations in antibody concentration are not the same. I don't have any replicates except for this control here, and it covers a broader range of antibody concentrations, which is actually good for what we are looking for.
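Fast Flexible Filling is JMP's own algorithm; to show the flavor of a space-filling design, here is a sketch using a Latin hypercube from scipy, which is an assumption on my part, not what JMP does, for the continuous factor, with the runs split across the two retrieval conditions (the CC1/CC2 labels follow the autostainer naming used later in the talk).

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(1)

# Space-fill the continuous factor with a Latin hypercube over 0-10 µg/mL,
# then assign each run to one of the two antigen-retrieval conditions.
sampler = qmc.LatinHypercube(d=1, seed=1)
conc = qmc.scale(sampler.random(n=10), l_bounds=[0.0], u_bounds=[10.0]).ravel()
retrieval = rng.permutation(np.repeat(["CC1", "CC2"], 5))  # balanced split

for r, c in zip(retrieval, conc):
    print(f"{r}  {c:5.2f} µg/mL")

# Unlike the 0/5/10 pattern of the custom design, every run probes a
# different concentration, which is what made this design attractive here.
```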
I wanted to compare the designs. First, I can do it graphically, because the area covered by each design is different, and from my point of view, the Space Filling Design will allow me to try more antibody concentrations, which may be a good point. But I can also compare the designs properly: in DOE, you have Design Diagnostics, Compare Designs, and you see I have both of them.
I already have the Fast Flexible, so I will add the Custom Design. The column names are the same, so I don't need to match columns; it will do it, and it recaps the factors. For the Fast Flexible, it doesn't start exactly from zero and doesn't go up to 10, but it's nearly there. For the model, I will redo the terms: I cannot have the categorical factor squared, but I have antibody by antibody and antibody by antigen retrieval. I cannot get antigen retrieval at power two, but it's okay, and we go for the design evaluation.
That's why I didn't do it one by one; I wanted to compare. In blue here, if you look at the power plot, it's the Fast Flexible Design, and in orange is the Custom Design. You see that [inaudible 00:17:23] the Custom Design looks better for the determination of the model than the Fast Flexible Design. If you go to the Fraction of Design Space plot, it's the same.
The Custom Design seems to fit better. If we compare the design diagnostics, they are relative to the Custom Design, meaning the Custom Design has a value of one, and if I look at the efficiency of the Fast Flexible Design, it is 0.7, so it's 30% less efficient, let's say, than the Custom Design. I just put this in here.
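As a back-of-the-envelope check of what "relative efficiency" means, this self-contained sketch (repeating the helpers from the earlier sketch; both designs are made-up stand-ins for the two JMP designs) computes the D-efficiency ratio of a spread-out design against a three-level custom-style design. The exact number will differ from JMP's 0.7, but the ratio comes out below one for the same reason: the quadratic model rewards points at the corners and center.

```python
import numpy as np

def model_matrix(ar, conc):
    c = (np.asarray(conc, float) - 5.0) / 5.0
    a = np.asarray(ar, float)
    return np.column_stack([np.ones_like(c), a, c, a * c, c ** 2])

def d_efficiency(X):
    n, p = X.shape
    return 100.0 * np.linalg.det(X.T @ X) ** (1.0 / p) / n

# Custom-style design: 0/5/10 µg/mL under both retrieval conditions (+1/-1),
# with a couple of duplicated points, as in the generated table.
custom = ([-1, -1, -1, 1, 1, 1, -1, 1, -1, 1],
          [0, 5, 10, 0, 5, 10, 0, 10, 5, 5])
# Space-filling-style design: ten spread-out concentrations, balanced split.
filling = ([-1, 1, -1, 1, -1, 1, -1, 1, -1, 1],
           [0.5, 1.6, 2.4, 3.7, 4.5, 5.8, 6.3, 7.9, 8.6, 9.7])

e_custom = d_efficiency(model_matrix(*custom))
e_fill = d_efficiency(model_matrix(*filling))
print(f"relative efficiency (filling / custom) ≈ {e_fill / e_custom:.2f}")
```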
These are just the three diagnostics I'm showing you here, and it seems like the Custom Design is better than the Space Filling one [inaudible 00:18:18], but let's see what the results will tell us. The profilers are obtained after entering [inaudible 00:18:26], so just to show you an example of the data: I have my DOE table here, and I just added the signal intensity and background responses. I look at my images and say, okay, I have no intensity, I have some, I have more, and so on, and I did the same for both designs.
Here it's the same; since there were two runs, there are two different data sets. At the end, I fit the model; I just click on the model. You see it's standard least squares, effect screening, with all the interactions here, and I fit everything together. I have some factors that I could remove, but I decided to keep everything. Then I go to the Profiler here. If we look for the best condition, I will maximize my desirability as it was set.
Maximizing my intensity, you see it goes to the high point here, and minimizing background, it found this condition. After this, I realized that maybe I don't want to maximize the signal intensity; I just want to match a target, because maybe the sample I'm using here is not a sample with low expression, and I want to be able to detect low and high expression of my target.
I will change it a bit and say, okay, I want to match a target, which is a middle intensity, not the highest one but the middle one. I still want to minimize the background, because background is not good. Now, if I do Maximize Desirability, you see that it changed the concentration of antibody to use. This is nearly what we obtained here; maybe the number is a little different, but it's okay. I did the same, and it is the same, for the Fast Flexible Design.
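The Profiler's desirability step is the key move here, so here is a hedged sketch of the classic Derringer-Suich-style goal shapes on our 0-to-3 scale, with a toy fitted model standing in for the real one (the response curves are invented for illustration, not fitted to our data). Switching the intensity goal from "maximize" to "match target" moves the recommended concentration, exactly as in the demo.

```python
import numpy as np

# Derringer-Suich-style desirabilities on the 0-3 scoring scale.
def d_max(y, lo=0.0, hi=3.0):        # larger-is-better (signal intensity)
    return np.clip((y - lo) / (hi - lo), 0, 1)

def d_min(y, lo=0.0, hi=3.0):        # smaller-is-better (background)
    return np.clip((hi - y) / (hi - lo), 0, 1)

def d_target(y, target=1.5, lo=0.0, hi=3.0):   # match-target (mid intensity)
    return np.where(y <= target,
                    np.clip((y - lo) / (target - lo), 0, 1),
                    np.clip((hi - y) / (hi - target), 0, 1))

# Toy fitted responses (hypothetical): intensity rises and saturates with
# concentration; background keeps rising with concentration.
conc = np.linspace(0, 10, 101)
intensity = 3.0 * conc / (conc + 2.0)
background = 0.25 * conc

# Overall desirability is the geometric mean of the individual desirabilities.
d_goal_max = np.sqrt(d_max(intensity) * d_min(background))
d_goal_mid = np.sqrt(d_target(intensity) * d_min(background))

print(f"maximize intensity  -> best conc ≈ {conc[np.argmax(d_goal_max)]:.1f} µg/mL")
print(f"match mid intensity -> best conc ≈ {conc[np.argmax(d_goal_mid)]:.1f} µg/mL")
```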
I obtained two different conditions: one at nearly 3.8 µg/mL of antibody in CC2, and one at 2 µg/mL in CC2. Here it's not pH 6 and pH 9, because on this autostainer it's called CC1 and CC2, but it's the same. Then I say, okay, I have these two conditions; I will compare them to the initial protocol.
Meaning our reference, which is our standard approach; these are the images I showed you before. Here you have the Custom Design condition and the Space Filling Design condition. You see that these two conditions give data that is at least as good, I would say even better, than the one we defined with our standard approach. For me, the Space Filling Design allowed me to test more antibody concentrations, which is useful when you have difficult targets.
I will go for this because, in addition, these two conditions you see here were in fact present in the Space Filling Design. I didn't have to run the conditions again, because I had the data in the Space Filling Design, and the images, to double-check that it was working well. So I chose a Space Filling Design to test on another protocol, just to see if it's working.
I won't go through everything; I just take the same approach and change my responses for the new target. I obtained this model where, to match the target of two here for signal intensity and to minimize the background, it said, okay, you should use 2.8 µg/mL of antibody in the CC2 condition.
If I compare it to the protocol we had developed, it said CC1 and 5 µg/mL, so it's not the same; both changed. But as you can see on the images, the protocol defined by the Space Filling Design is at least as good as the one we usually develop. Now, what is missing to convince operations and managers is for sure the cost and time of our OFAT approach compared to the DOE here.
First, I compared the Custom Design and the Space Filling Design, including the time it took me to design everything, to our standard approach. Our standard approach is in blue in this graph, Space Filling in red, and Custom Design in green, and I compare the number of cycles, which is time-consuming, the number of slides, the time from design to protocol, and the time from technical request to results.
If you look at the time from technical request to results, even having to design everything, and even though it's a new way of working for the technical team, we shortened the time to results. Now, with the second example, I did not have to design everything again. It's more like what we will do at the next step if we take this approach as our new standard approach, and so I compared it to the standard approach used for this project.
In blue is the standard approach, and in striped red I put the DOE setup plus the comparison cycle, because since we have a reference protocol, I did a comparison cycle. But if this is the new approach we are using, then we will not do a comparison cycle, because we will not have a reference; we will just do the DOE setup, and it's in red. As you can see, it decreased the number of cycles, the number of slides, and the number of antibodies we used, and the final time to results in days is shortened by half, which is quite nice.
In conclusion, I like this quote from Steve Jobs: "Start small, think big." At the beginning, when I saw all these [inaudible 00:24:53], I thought, oh, it's nice, but can I apply it? I struggled a bit, but I said, okay, I will start the simplest way, and I set it up for this simple IHC, and I convinced my manager, even more rapidly than I thought, then the operational team, and eventually other leaders, that the approach can be applied to IHC.
We cannot guarantee it in every case where we need optimization, as we do now, because you don't know at the beginning if it will be an easy development or not. The next step I would like to introduce to the lab is the multiplex setup: detecting multiple targets on the same tissue, starting with two targets instead of the simplex one and going up to five targets. This is ongoing.
Finally, I want to say thank you. I want to thank the steering committee for having selected my abstract. I want to thank the Cerba Research Montpellier team, and also the lab technicians, because it's sometimes hard to follow my crazy ideas.
I want to thank my manager, who is always supportive of new strategies and new ways of seeing things, and all of the conception and validation team I am a part of, for following me in this test too. I want to thank the JMP French team, and Stephane for being supportive, answering all the questions, and allowing me to load this new JMP 17 version when I was stuck with 16, so I could see the nice Design Explorer platform. And I thank you all for your attention.