Occasional Contributor

## Re: DSDs and sparsity of effect principle

@markbailey  thanks for your help. I wasn't trying to put words in your mouth, just trying to get clarity around what you're saying. I think I understand now.

Staff

## Re: DSDs and sparsity of effect principle

I recommend using the Fit Definitive Screening platform or the Generalized Regression (JMP Pro) instead of Fit Least Squares.

Learn it once, use it forever!
Contributor

## Re: DSDs and sparsity of effect principle

I'm replying here after reading all of @markbailey and @MathStatChem 's input. I concur with everything they are advising or discussing. My recommendation was 'go slowly'. Start with a 'first' experiment...and I think we are all suggesting a DSD, if that works for you logistically as well as from an experimental design point of view. What I'm suggesting for after the DSD and its results (statistical and practical) is...pause. Take a deep breath with your team. Ask the following questions:

1. What have we learned with respect to solving the practical problem at hand?

2. Given what we've learned, what is the next logical step to solve the problem?

The answers to 1. can be multiple, conflicting, or just plain useless (but even then, as one of the sage engineers I worked with said when an experiment showed 'nothing': "Hey, knowing an 'is not' can be just as valuable as knowing an 'is'."). Try not to get too wrapped up in picking a specific DOE method (optimal design, classic RSM, more screening) at this point. Instead, fit the problem, armed with this hard-won 'new' knowledge, to the optimal next analytics steps. I think this is the essence of Mark's comments, DOE or otherwise.

For example, what if you learn that your measurement system for the 'drug effect', which seems to be paramount here, is extremely noisy, making it hard to detect a signal above the noise? Subsequent DOE work could still be problematic...maybe those resources are better spent on the measurement system first? Or what if some of the factors and levels you've chosen are persnickety? 'Persnickety' is a technical term I use when factors in an experiment are difficult to control or set, or have lots of known and unknown sources of variability. In working with my biotech JMP process/product design customers over the years...any time they worked with 'cells'...well, 'persnickety' was being polite.
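The measurement-noise concern can be made concrete with a quick simulation. This is a hypothetical sketch, not anything from the thread: the effect size, run count, noise levels, and 2-sigma detection rule are all invented for illustration, and Python/NumPy stands in for JMP.

```python
import numpy as np

rng = np.random.default_rng(7)
effect = 1.0  # invented true effect of a single factor
# Stylized 13-run factor column at three levels, as in a small DSD
x = np.array([-1.0, 0.0, 1.0] * 4 + [0.0])

def detection_rate(noise_sd, reps=2000):
    """Fraction of simulated experiments where the effect clears a 2-sigma bar."""
    hits = 0
    for _ in range(reps):
        y = effect * x + rng.normal(0, noise_sd, x.size)
        b = (x @ y) / (x @ x)           # least-squares slope estimate
        se = noise_sd / np.sqrt(x @ x)  # known-sigma standard error (a simplification)
        hits += abs(b / se) > 2
    return hits / reps

print(detection_rate(0.5), detection_rate(3.0))  # quiet vs noisy assay
```

With the quiet assay the effect is detected almost every time; with the noisy one, only occasionally, which is the argument for spending on the measurement system before spending on more DOE runs.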

Occasional Contributor

## Re: DSDs and sparsity of effect principle

Thank you. I think I have a good idea of the path forward now, and will plan to use a DSD as the first run.

Community Trekker

## Re: DSDs and sparsity of effect principle

Are the 10 factors you have chosen to experiment on all from the same process step?  Typically, biopharma processes are multistep (cell lysis, clarification/TFF, refold, chromatographic purification, buffer exchange, ...).  If the 10 factors are dispersed across several steps, then one nice advantage of a DSD is that it projects into a response surface design for any 3 factors, and even with 4 factors you can build a nice prediction model.  Also, in these processes the factors from one step don't directly interact with the factors from another step.  They can interact "indirectly", as the output from one step (which may depend on that step's factors) is an input to another step.  This "indirect" interaction is often smaller than any direct process factor effect, however.

Let's say that step 1 has 3 factors, step 2 has 3 factors, and step 3 has 4 factors.  Then you can run the DSD on the full process (Steps 1-3), characterize and measure the product quality at each step, and build many different models:

A. A model to predict product quality after Step 1

B. A model to predict product quality after Step 2.  You can use the resulting quality measurements from Step 1 as covariates in this model.

C. A model to predict product quality after Step 3.  You can use the resulting quality measurements from Step 1 or Step 2 as covariates in this model.
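As a hypothetical sketch of model B (every run count, factor setting, and coefficient below is invented for illustration), here is how the step-1 quality measurement could enter the step-2 model as a covariate, using plain least squares in Python rather than JMP:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 24  # invented number of runs carried through both steps

# Invented coded factor settings for step 1 and step 2 (three factors each)
X1 = rng.choice([-1.0, 0.0, 1.0], size=(n, 3))
X2 = rng.choice([-1.0, 0.0, 1.0], size=(n, 3))

# Simulated step-1 quality: depends on step-1 factors plus noise
q1 = 5.0 + 1.5 * X1[:, 0] - 0.8 * X1[:, 2] + rng.normal(0, 0.3, n)

# Simulated step-2 quality: depends on step-2 factors AND on the step-1
# output -- the "indirect" interaction between steps described above
q2 = 2.0 + 0.9 * X2[:, 1] + 0.6 * q1 + rng.normal(0, 0.3, n)

# Model B: regress step-2 quality on step-2 factors with q1 as a covariate
D = np.column_stack([np.ones(n), X2, q1])
coef, *_ = np.linalg.lstsq(D, q2, rcond=None)
print(coef)
```

The last coefficient estimates how step-1 quality carries into step-2 quality, i.e. the "indirect" interaction between steps, separately from the direct step-2 factor effects.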

Don't forget to consider blocking when doing this type of work.  Often the block is associated with the analytical run used to measure the material coming out of the experiment, and you can block the experiments on "assay run" to account for any potential bias or confounding that comes from the assay.
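A tiny illustration of why blocking on "assay run" helps (all numbers are invented: two assay runs with a hypothetical 1.2-unit shift between them, and Python least squares standing in for JMP):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
x = np.tile([-1.0, 1.0], 8)                # one hypothetical process factor
block = np.repeat([0, 1], n // 2)          # two assay runs, 8 samples each
assay_shift = np.array([0.0, 1.2])[block]  # invented run-to-run assay bias

y = 10.0 + 0.8 * x + assay_shift + rng.normal(0, 0.2, n)

# Unblocked model: intercept + factor only
D0 = np.column_stack([np.ones(n), x])
b0, *_ = np.linalg.lstsq(D0, y, rcond=None)

# Blocked model: adds an indicator for the second assay run
D1 = np.column_stack([np.ones(n), x, block.astype(float)])
b1, *_ = np.linalg.lstsq(D1, y, rcond=None)

resid0 = np.std(y - D0 @ b0)  # assay shift inflates the apparent noise
resid1 = np.std(y - D1 @ b1)  # blocking soaks the shift up
print(resid0, resid1)
```

The blocked fit captures the assay shift in its own term instead of letting it inflate the error against which the factor effects are judged.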

All of that being said, I would definitely budget for follow-on experiments, as the DSD often doesn't give you all the information you want about the process, or you may discover that you didn't choose the best ranges for experimentation, etc...

Occasional Contributor

## Re: DSDs and sparsity of effect principle

Thank you @MathStatChem . Unfortunately, no; it's 10 factors for the same process step (production with cell culture). I definitely don't believe all 10 will be important, but it could be something like 4-6 (although maybe even less). We just aren't sure because we don't have a whole lot of data with this process.

Community Trekker

## Re: DSDs and sparsity of effect principle

I'm guessing that you already have a good idea on what is important and what interactions may be present.

If you haven't already, it might be good to make an "interaction" map.  Just go through each pair of factors and assess whether you think they will interact.  You can also consider whether there will be a max or min output response in the middle of the design range.  If so, then you likely also have at least one quadratic effect.  Then count up the main effects (10 in this case) and the number of potentially important interactions and quadratic effects, and make sure that number is less than the number of runs in the experiment.  You can also check whether the two-factor interactions are strongly correlated with each other; if so, you might do some shuffling of the assignment of the factors to the DOE columns.  Another advantage of adding the +4 additional runs in this case (10 continuous factors) is that it creates a design with less confounding among the two-factor interactions and quadratic effects.
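To make that counting exercise concrete, here is a Python sketch using a 6-factor DSD built from a conference matrix (smaller than the 10-factor case in question, but the structure is the same). It tallies candidate effects against runs and checks the confounding structure:

```python
import numpy as np
from itertools import combinations

# A 6x6 conference matrix (Paley construction); a 10-factor DSD is built
# the same way from a larger conference matrix.
C = np.array([
    [ 0,  1,  1,  1,  1,  1],
    [ 1,  0,  1, -1, -1,  1],
    [ 1,  1,  0,  1, -1, -1],
    [ 1, -1,  1,  0,  1, -1],
    [ 1, -1, -1,  1,  0,  1],
    [ 1,  1, -1, -1,  1,  0],
])

# DSD: each conference-matrix row, its fold-over mirror, plus one center run
X = np.vstack([C, -C, np.zeros((1, 6))])
n_runs, k = X.shape  # 13 runs, 6 factors

# Tally the candidate effects as suggested above
pairs = list(combinations(range(k), 2))
n_effects = k + len(pairs) + k  # mains + two-factor interactions + quadratics
print(n_runs, n_effects)  # 13 runs vs 27 candidate effects

# Main effects are orthogonal to every two-factor interaction in a DSD
XI = np.column_stack([X[:, i] * X[:, j] for i, j in pairs])
print(np.abs(X.T @ XI).max())  # exactly 0: mains unconfounded with 2FIs

# The two-factor interactions ARE correlated with each other, which is
# where shuffling the factor-to-column assignments can pay off
corr = np.corrcoef(XI, rowvar=False)
print(np.abs(corr[np.triu_indices_from(corr, k=1)]).max())
```

Even in this small case, 27 candidate effects chase 13 runs, which is exactly why the effect-sparsity assumption (and a budget for follow-up runs) matters so much with 10 factors.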

Staff

## Re: DSDs and sparsity of effect principle

Be careful.

There is more to consider than potential interactions among factors across processing steps. It is likely that the early steps impose a restriction on the randomization of the runs that involve the factors in the later steps. If that is the case, then a DSD is not appropriate. A custom design with appropriately defined hard-to-change and very-hard-to-change factors is likely to be more realistic and effective.

Improperly considering the multiple levels of randomization and ignoring the additional random effects in the model can compromise the inferential information from the analysis.

I apologize if I misunderstood your suggestion.

Learn it once, use it forever!
Community Trekker

## Re: DSDs and sparsity of effect principle

That's a good point.  If you are not going to carry material all the way through the steps, then for that number of factors you might be better off just characterizing each step separately.

I've done it both ways.