Real-Time Rescue of DOEs Deviating from the Plan: Case Studies in Biologics (2025-US-PO-2441)

In practice, things often go wrong, even with a well-planned design of experiments (DOE). Operators might make errors, or instruments might not cooperate. In biologics, experiments can be expensive, even on a small scale. What happens if something goes wrong in the execution of an already costly experiment? Is it possible to rescue the DOE as soon as it deviates from the plan without wasting any completed runs? We share two examples using the JMP DOE platform to plan, evaluate, and modify designs in biologics.

In Example 1, we planned an I-optimal design with five continuous factors and one blocking factor representing two instruments. During execution, the operator deviated from the planned instrument ID and realized the mistake after completing one-third of the runs. To rescue the design, we reassigned the block IDs for the remaining runs to the two instruments in an order that achieved comparable design properties to the original.

In Example 2, we planned an A-optimal split-plot design, but the scientists realized that the instrument could not tolerate the high pressure settings at the start of the second whole-plot runs. Due to resource limitations, we prioritized saving the completed runs. Since it is not straightforward to augment a split-plot design, we moved forward with a modified design that incorporated the completed runs, resulting in less desirable design properties, as acknowledged by the team.

These examples illustrate how we can adapt and rescue DOEs in response to challenges, emphasizing the importance of flexibility in experimental design.

 

 

Hello everyone. My name is Yang, and I'm with Johnson & Johnson Innovative Medicine. Experiments can sometimes go off track. Mistakes may occur due to operator error or instrument issues. In biologics, these experiments can be costly, so what do we do when an experiment deviates from the plan?

In this presentation, I will share two examples where the DOE deviated from the initial plan during execution. We will demonstrate how JMP tools can be used to evaluate, compare, and understand risk during experiment modifications and mitigations.

Case 1. We generated an I-optimal design to study five numeric factors. The experimental runs were designed to be spread evenly across two instruments, which served as the blocking factor in the design. After completing nine runs, the scientists discovered that four runs, which were supposed to be run on instrument one, were accidentally run on instrument two. After reviewing the design options, we decided to reshuffle the remaining runs to restore the desired aliasing properties.
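As a rough illustration of the rescue step (not the procedure JMP uses internally), the sketch below enumerates block assignments for the remaining runs and keeps the one whose block contrast is least correlated with the main effects. The design matrix, run counts, and deviated labels are all made-up stand-ins.

```python
# Illustrative sketch: choose block labels for the remaining runs so the
# block effect is nearly uncorrelated with the main effects. All settings
# and labels here are hypothetical stand-ins for the real design.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_total, n_done, n_factors = 18, 9, 5
X = rng.choice([-1.0, 0.0, 1.0], size=(n_total, n_factors))  # stand-in factor settings

done_blocks = np.array([1, 2, 2, 2, 2, 1, 1, 1, 1])  # actual (deviated) labels for completed runs
remaining = n_total - n_done
n_block2_left = n_total // 2 - np.sum(done_blocks == 2)     # keep the two instruments balanced

def worst_corr(blocks):
    """Largest |correlation| between the block contrast and any main effect."""
    b = np.where(blocks == 1, -1.0, 1.0)
    return max(abs(np.corrcoef(b, X[:, j])[0, 1]) for j in range(n_factors))

best, best_score = None, np.inf
for idx in itertools.combinations(range(remaining), n_block2_left):
    tail = np.ones(remaining, dtype=int)
    tail[list(idx)] = 2
    blocks = np.concatenate([done_blocks, tail])
    score = worst_corr(blocks)
    if score < best_score:
        best, best_score = blocks, score

print("reassigned block IDs:", best, " worst |corr| =", round(best_score, 3))
```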

The Compare Designs feature in JMP is very useful for understanding and comparing our options in real time. In this study, one of our criteria was to minimize the correlation between the block and the main effects. In the color map, blue indicates zero correlation and red indicates perfect correlation.
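To make the idea concrete, here is a minimal sketch, with stand-in data, of the quantity the color map displays: the absolute pairwise correlation among the model columns (five main effects plus a block contrast).

```python
# Sketch of what the color map encodes: absolute pairwise correlations
# among model columns. Values near 0 plot as blue in JMP, near 1 as red.
# The design matrix below is a stand-in, not the study's actual design.
import numpy as np

rng = np.random.default_rng(7)
X = rng.choice([-1.0, 1.0], size=(18, 5))     # stand-in main-effect columns
block = np.repeat([-1.0, 1.0], 9)             # block contrast for two instruments
M = np.column_stack([X, block])

C = np.abs(np.corrcoef(M, rowvar=False))      # 6 x 6 color-map matrix
print(np.round(C, 2))
```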

Looking at the color map, we see that the correlations between the main effects and the block are reasonably low in the original design. For the deviated design, these correlations are higher, as indicated by the whitish color.

After reshuffling the remaining runs, the final design restored the desirable aliasing structure. In the power comparison plot, black represents the original design, blue represents the deviated design, and red represents the final design. As you can see, the power of the main effects in the deviated design decreased, but the final design improved the power, making it comparable to the original design.
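The power numbers behind such a plot can be approximated as follows. This is a generic calculation on a stand-in design, using JMP's default convention that the anticipated coefficient equals the error standard deviation; it is not the study's actual matrices.

```python
# Hedged sketch of a design power calculation: probability of detecting a
# main effect whose anticipated coefficient equals the error SD.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, k, alpha = 18, 5, 0.05
X = np.column_stack([np.ones(n), rng.choice([-1.0, 1.0], size=(n, k))])

XtXi = np.linalg.inv(X.T @ X)
dfe = n - X.shape[1]                   # error degrees of freedom
f_crit = stats.f.ppf(1 - alpha, 1, dfe)

for i in range(1, k + 1):
    ncp = 1.0 / XtXi[i, i]             # (beta/sigma)^2 / [(X'X)^-1]_ii with beta/sigma = 1
    power = 1 - stats.ncf.cdf(f_crit, 1, dfe, ncp)
    print(f"X{i}: power = {power:.2f}")
```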

Additionally, in the fraction of design space plot, blue represents the deviated design. We observe that both the original and final designs provide lower relative prediction variance than the deviated design. In summary, the deviated design had a slightly worse aliasing structure, reduced main-effect power, and increased prediction uncertainty.

However, we were able to restore these properties, making them comparable to the original design by reshuffling the remaining runs across the two instruments.
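For completeness, the relative prediction variance summarized by a fraction of design space plot can be sketched as follows, again with a stand-in design: sample points across the design space, compute x'(X'X)⁻¹x at each point, and report the distribution.

```python
# Sketch of a fraction-of-design-space summary: the distribution of the
# relative prediction variance x'(X'X)^-1 x over a hypothetical cuboidal
# design space. JMP plots variance against the fraction of space below it.
import numpy as np

rng = np.random.default_rng(11)
n, k = 18, 5
X = np.column_stack([np.ones(n), rng.choice([-1.0, 1.0], size=(n, k))])
XtXi = np.linalg.inv(X.T @ X)

pts = rng.uniform(-1, 1, size=(10000, k))
F = np.column_stack([np.ones(len(pts)), pts])     # model matrix at each sampled point
rel_var = np.einsum("ij,jk,ik->i", F, XtXi, F)    # quadratic form per point

for q in (0.1, 0.5, 0.9):
    print(f"{int(q*100)}% of space has relative variance <= {np.quantile(rel_var, q):.3f}")
```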

Case 2 is an A-optimal design featuring one hard-to-change factor and two easy-to-change factors. In this scenario, concentration is the hard-to-change factor. Based on historical experience, certain combinations of concentration and feed will challenge the instrument's tolerance level. As a result, we cannot explore the entire design space. Instead, we must define constraints on concentration and feed to ensure that all design points remain within the feasible region, represented by the shaded gray area.
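The talk does not give the actual constraint, so the following is purely hypothetical; it only shows the mechanics of keeping candidate points inside a linearly constrained feasible region for concentration and feed.

```python
# Illustrative only: the constraint coefficients, factor ranges, and units
# are made up. A linear constraint a*concentration + b*feed <= c keeps
# design points inside the instrument's tolerance (the gray region).
import numpy as np

rng = np.random.default_rng(5)
candidates = rng.uniform([1.0, 10.0], [5.0, 50.0], size=(1000, 2))  # (conc, feed)

a, b, c = 8.0, 1.0, 70.0    # hypothetical constraint coefficients
feasible = candidates[a * candidates[:, 0] + b * candidates[:, 1] <= c]
print(f"{len(feasible)} of {len(candidates)} candidate points are feasible")
```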

When scientists were running the second whole plot, they had to stop the experiment because the instrument was approaching its tolerance limit with a potential risk of failure. We quickly adjusted the constraints and factor settings, prioritizing saving the first whole plot runs that had already been completed.

After completing the last whole plot, the scientists discovered a systematic bias, which means some of the design points were actually run at levels higher than the original design points. As a result, the low end of the design space was not explored, so a whole plot had to be added to cover that area.

The final design looks like this. In the past, I have used the Augment Design tool and the covariate candidate runs feature in Custom Design to modify a designed experiment, incorporating completed runs when deviations occurred.

However, augmenting a split-plot design like this is not straightforward in JMP 17, the version I was using. Due to limited resources, we had to prioritize preserving all the whole plot runs we had already completed. We used the Compare Designs tool in JMP to evaluate different options and communicate potential risk to the customer. We observed that the aliasing structure of the original design worsened in the modified design and worsened again in the final design.
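One way to think about the modification, sketched below with made-up numbers and with the whole-plot random-effect structure ignored for brevity, is to hold every completed run fixed and choose the added runs from a feasible candidate set so as to minimize the A-criterion, trace((X'X)⁻¹). This illustrates the idea only; it is not JMP's augmentation algorithm.

```python
# Rough sketch (not JMP's method): keep completed runs fixed and pick a
# 4-run whole plot from candidates in the unexplored low end of the space,
# minimizing the A-criterion. All data are hypothetical stand-ins.
import itertools
import numpy as np

rng = np.random.default_rng(9)
completed = rng.choice([-1.0, 1.0], size=(12, 3))   # stand-in for the finished runs
candidates = rng.uniform(-1, -0.5, size=(8, 3))     # low end of the space, left unexplored

def a_crit(X):
    M = np.column_stack([np.ones(len(X)), X])       # intercept + main effects
    return np.trace(np.linalg.inv(M.T @ M))

best, best_score = None, np.inf
for idx in itertools.combinations(range(len(candidates)), 4):
    X = np.vstack([completed, candidates[list(idx)]])
    score = a_crit(X)
    if score < best_score:
        best, best_score = idx, score

print("chosen candidate rows:", best, " A-criterion =", round(best_score, 3))
```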

The power of the final design increased, which is not surprising after adding an extra whole plot. Although prediction uncertainty increased in the modified design, it improved in the final design, likely due to the additional whole plot added at the end.

To wrap up, it is quite possible to modify a design while preserving all completed runs when deviations occur in the middle of the experiment. The Compare Designs tool in JMP allows us to quickly reevaluate design options and effectively communicate business risk, even in complex, resource-limited scenarios. That concludes my presentation. Thank you so much for your attention.

Skill level: Intermediate

Published on 07-09-2025 08:58 AM by Community Manager | Updated on 10-28-2025 11:41 AM

Start: Wed, Oct 22, 2025 05:15 PM EDT
End: Wed, Oct 22, 2025 06:00 PM EDT
Ped 11