Manchester, UK 5-7 March
Agenda

5 Mar Tuesday

  • 13:30-18:30

    • Early Conference Registration

      Add-on Workshops (£150) - Whitworth 1, 2, & 3

      Room
      Ballroom Foyer
  • 19:00-21:30

    • Welcome Dinner

      Room
      The Refuge and Winter Garden

6 Mar Wednesday

  • 6:30-8:00

    • Breakfast

      Room
      Refuge Restaurant
  • 8:00-8:45

    • Discovery Expo

      Room
      Ballroom
  • 8:00-9:00

    • Registration

      Room
      Ballroom Foyer
  • 9:00-10:15 Plenary

  • 10:30-11:15

    • Data Cockpit: Ways to Enrich a Master Table with Additional Data

      Siltronic AG is one of the global technology leaders in the semiconductor wafer industry. For the production of silicon wafers, many process steps are needed, beginning with crystal growth and ending with packaging the wafers.

      In a complex production environment, a lot of data is generated, so different details are important from different perspectives. Data may be saved to databases and other resources that JMP is generally able to load. This data certainly does not fit into one data table. In this talk, I show how a master data table can be extended with additional data, depending on the context. I also give examples of how to query additional tables that contain more detailed information and that are linked to the master data table.

      The master data table is provided with a set of table scripts to implement the described features. Detailed JSL snippets are discussed to realize the described functionality.
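
      The talk provides its own JSL; purely as a minimal sketch of the enrichment idea, the snippet below uses hypothetical table and column names (master.jmp, details.jmp, LotID) to pull additional columns into a master table by matching a key.

        // Hypothetical tables and key column; the talk's scripts are more elaborate
        master = Open( "master.jmp" );
        details = Open( "details.jmp" );
        // Pull additional columns into the master table by matching the key
        master << Update(
        	With( details ),
        	Match Columns( :LotID = :LotID )
        );
        // A virtual join (Link ID/Link Reference column properties) is an
        // alternative when the detail tables should stay separate.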

      Room
      Oak Room
    • Graph Builder 30-Second Challenge

      The potential of Graph Builder is well-known by many users. I want to demonstrate that potential by giving myself several challenges to create powerful visualizations in 30 seconds each. These challenges range from very easy to very complex, including a mosaic plot, trend charts, control charts with drill down, filtering and column switching, heat maps with lots of variables, and dissolution curve comparisons with (multiple) drill downs. To be fair, for the most complex I may need 45-60 seconds, but I still think that's pretty powerful. The timer is running! ;)
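
      As a flavour of how such builds can be captured for replay, here is a minimal Graph Builder script; it uses the age/height/sex columns of the JMP sample table Big Class rather than the presenter's data.

        // Points plus smoother, overlaid by group (Big Class sample data)
        Graph Builder(
        	Variables( X( :age ), Y( :height ), Overlay( :sex ) ),
        	Elements( Points( X, Y ), Smoother( X, Y ) )
        );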

      Room
      Whitworth 1
    • Cutting Costs, Elevating Quality: DOE's Impact on Immunohistochemistry Clinical Protocol

      Cerba Research is a globally renowned company that specializes in delivering top-notch analytical and diagnostic solutions tailored for clinical trials worldwide. At Cerba Research Montpellier, our dedicated team customizes immunohistochemistry protocols to detect specific target expressions within patients' tissue sections. To address the escalating demand for protocol development and enhance process profitability, we recognize the vital need to streamline development timelines and reduce costs.

      Given the diversity of custom protocols to be developed, the conventional OFAT (one factor at a time) approach is no longer sufficient. We have therefore undertaken an in-depth evaluation, comparing various design of experiments (DOE) methodologies, including custom design and space-filling design, using JMP. These DOE approaches are evaluated against previously developed OFAT protocols. We present data illustrating the comparative advantages of OFAT and DOE approaches, in terms of cost-effectiveness and quality.

      Room
      Whitworth 2
  • 10:30-12:00

    • JMP Lab

      Participate in testing features, provide critical user feedback, and experience new innovations first-hand. You will have the opportunity to directly influence the development of JMP, helping to enhance functionality, ease-of-use, and the overall user experience.

      Room
      Oak Lounge & Boardroom
  • 11:30-12:15

    • Control Charting Dynamic Processes

      Many industrial processes involve parameters that are dynamic for at least part of the manufacturing sequence, e.g., chemical batch reactor temperature or pressure, autoclave vacuum ramp. It is important for batch-to-batch consistency to find a way to compare these dynamic phases of production or setup in a similar way to the use of control charts in monitoring steady state production.

      JMP can be used, with a few relatively simple steps, to create dynamic control limits from a number of representative batches. Future batches can then be overlaid in Graph Builder to ensure they fit within these historical limits. This demonstration shows the steps used to create the control limits and the use of the limits for batch-to-batch comparisons. Some of the challenges involved with this type of data are discussed.
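
      One plausible minimal implementation of the idea in JSL (hypothetical columns Batch, Time, and Temp; not necessarily the presenter's exact steps):

        dt = Current Data Table();
        // Time-point-wise three-sigma limits computed from representative batches
        dt << New Column( "LCL", Numeric,
        	Formula( Col Mean( :Temp, :Time ) - 3 * Col Std Dev( :Temp, :Time ) ) );
        dt << New Column( "UCL", Numeric,
        	Formula( Col Mean( :Temp, :Time ) + 3 * Col Std Dev( :Temp, :Time ) ) );
        // Overlay batches against the historical envelope on a shared axis
        Graph Builder(
        	Variables( X( :Time ), Y( :Temp ), Y( :LCL, Position( 1 ) ), Y( :UCL, Position( 1 ) ) ),
        	Elements( Points( X, Y( 1 ) ), Line( X, Y( 2 ) ), Line( X, Y( 3 ) ) )
        );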

      Room
      Oak Room
    • Going Quantitative: Text Analysis on Surveys and Voice of the Customer

      Text Explorer in JMP is a very strong tool that can be used to organize the responses from a survey or a series of voice-of-the-customer (VOC) interviews and make them ready for testing. In this presentation, I go through all the steps, using a real example from my organization. Surveys are based on either closed-ended (rate using 1-5 stars), partially closed-ended (multichoice answers from a list of options) or open-ended (free text field) questions.

      As expected, there is much more variation in the respondents' answers for open-ended or partially closed-ended questions. Text Explorer offers a way to code responses as terms and phrases. They can then be processed in many ways, enabling predictive modelling and hypothesis testing, which is demonstrated in this presentation.

      Text Explorer can also be used to group VOC responses into topics later used as quality drivers in a CTQ tree. Constructed the right way, this method, which we demonstrate in JMP, can save time, while also enabling new insights.

      Going from qualitative to quantitative analysis of surveys and voice of the customer is one way to establish an organization that remembers.
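
      For readers who prefer scripting, the platform can also be launched from JSL; a minimal sketch with a hypothetical free-text column named Response:

        dt = Current Data Table();
        te = dt << Text Explorer(
        	Text Columns( :Response ),
        	Language( "English" )
        );
        // Terms and phrases can then be curated interactively and saved as
        // columns (Save submenu) for predictive modelling and hypothesis testing.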

      Room
      Whitworth 1
    • JMP Live 18.0: Communicating JMP Insights Gets Even Easier

      JMP Live has always been about taking what you've discovered using JMP and communicating it to other people in your organization, whether they have a JMP license or not. It facilitates presenting JMP discoveries in a way that retains JMP's interactivity, something that PowerPoint simply cannot do. And JMP Live will keep your insights up to date by allowing you to easily schedule data updates that automatically regenerate the reports, keeping them relevant while you sleep.

      JMP Live 18.0 adds a set of enhancements that makes all of this easier. JMP Live can now be integrated with your company's Active Directory so that access to JMP Live content can automatically reflect changes in your organization. You can now mention others when commenting on posts, making it much easier to pull others into conversations about JMP content. You now have greater control over who can access the content in different JMP Live folders. And we have made many additional enhancements to make it easier to navigate around JMP Live and understand what has happened when things go wrong.

      Please join us to see all of the enhancements the team has made.

      Room
      Whitworth 2
  • 12:30-13:45

    • Semiconductor Industry Lunch Gathering

      Join us for lunch and a conversation exploring data-driven insights and collaboration among those in the semiconductor industry. Limited Seating. RSVP here.

    • Lunch

      Room
      Refuge Restaurant
  • 14:00-17:45

    • JMP Lab

      Participate in testing features, provide critical user feedback, and experience new innovations first-hand. You will have the opportunity to directly influence the development of JMP, helping to enhance functionality, ease-of-use, and the overall user experience.

      Room
      Oak Lounge & Boardroom
  • 14:00-15:00

    • JMP Roadmap Lightning Talks

      Learn about what’s new in JMP® 18 and what’s planned for the future from JMP product managers and developers. Topics: DOE, Reliability and Quality (Whitworth 1), Statistics and Modeling (Whitworth 2), Data Access, Integrations and Sharing Results (Oak Room)

      Room
      Whitworth 1, Whitworth 2 & Oak Room
  • 15:15-15:45

    • Discovery Expo

      Room
      Ballroom
  • 16:00-16:45

    • Using JMP Workflow Builder to Automate Analysis of an Annual Verification of a Spectrometric Method

      Pharmaceutical products are tested for a range of attributes, including identification, assay, and uniformity of content. Whilst HPLC is commonly used for analysis, it can be time-consuming and labour-intensive. As a result, our company has registered an additional Raman Spectrometric method for several products.

      As part of the license for these methods, an annual comparison with the reference method is required. For these comparisons, tablets or capsules are analysed by Raman before the same samples are analysed by HPLC. The comparisons between both tests are required to pass a range of acceptance criteria.

      Analysis of the results has previously been completed by one staff member, with all procedures and results recorded in a notebook. A second staff member checks the analysis before generating a formal report. Whilst automation of the calculations had been previously discussed, a lack of scripting experience hindered implementation.

      The introduction of JMP Workflow Builder in JMP 17 has enabled our company to automate the analysis without needing to learn to script, saving time and reducing potential errors. This presentation demonstrates the steps taken to generate a workflow and presents the results via a dashboard.

      Room
      Oak Room
    • A Case Study of JMP Live Integration with Python for Production Systems

      In manufacturing industries, a considerable amount of data is generated by the production systems. To extract meaningful information from the data, statistical analyses are performed by different levels of specialized staff. Nevertheless, in most cases, the reports are produced manually, and they are static. JMP Live enables people to access the information directly from a web browser; it also includes dynamic features. Furthermore, it is easy to use even by people who are not digital natives or data literate.

      To integrate JMP Live with an existing static report, there are two options: rewrite the script with JSL or reuse the previous Python script. In this case study, to save resources, we present the reuse of the Python code, chosen for its ease of integration with JMP Live.

      The previous static report was produced manually on demand with Python, and the results were PDF reports sent via email. With the demand for analysis that's updated daily, JMP Live was chosen as a technical solution. To integrate the old code with JMP Live, the data is prepared using the existing Python script, which writes it to a database that JMP Live can access at any time and update automatically.

    • Zinc Coating Excellence: Leveraging JMP for Accurate Pressure Control and Thermodynamic Modelling

      In the field of continuous thin sheet metal galvanizing for the automotive steel industry, precise control of the zinc layer is of great importance. At the heart of the continuous galvanizing process lies the zinc pot or bath, an essential element for ensuring smooth production while managing costs.

      This presentation explains the use of Clecim's air gap system (Dynawipe), which is essential for controlling and regulating the thickness and uniformity of the coating across the width and length of the thin steel strips. This process meets the ongoing challenge of optimizing corrosion protection and minimizing zinc consumption; it also fits with the Industry 4.0 tendency to record, analyze, and model the physical processes to improve customer performance.

      This presentation shows how JMP was used, step by step, to create a pressure map as a function of various parameters for comparison with finite element modeling. The main challenge we addressed was the precise determination of the maximum pressure peak. Our approach consists of recording the data, modelling the pressure using a Gaussian curve (Fit Curve platform), exporting the parameters, modelling them as a function of the process parameters using Fit Model, and then producing an animated chart using Graph Builder.

      Room
      Whitworth 2
  • 17:00-17:45

    • Using Microbial Metabolic Profiles to Improve Scale-Down Model Qualification and Process Characterization Studies

      In biopharmaceutical process development, the characterization of critical process parameters (CPPs) and controllable process parameters is crucial. As in many industries, process development and characterization studies start at the laboratory scale and the production process is subsequently upscaled to the final production facility. JMP plays a central role as a tool to support these studies – from DOE to process modeling.

      The use of online sensor data is a potential source of information, which is currently underutilized in process development and characterization studies. In particular, the comparison of microbial metabolic profiles between the production scales is mostly performed exploratively. The statistical analysis of this data is an option to understand differences and can help to enable better process control strategies.

      In this talk, we explore:

      • How Functional Data Explorer in JMP Pro can give a better understanding of scale differences.
      • The current usage of JMP for process characterization studies at Lonza.
      • The utilization of Functional Data Explorer in JMP Pro to support process understanding and optimization.
      Room
      Oak Room
    • Should I Bring My Umbrella to Manchester? Analogy of an Industrial Process Monitoring

      At the heart of industrial processes, more and more data is being collected and stored. This data is the mirror of process behaviour, and identification of its content in real time is the key to success.

      Using a case study based on meteorological data, all the steps needed to maximize the impact of industrial data and decision making as close as possible to the process are addressed.

      The use of a REST API, or database access, to obtain real-time information is therefore crucial. In this case, HTTP Request is used to regularly retrieve the seven-day weather forecast for Manchester. Then, after automatic data preparation, from JSON parsing to the definition of specification limits, JMP Live reports are automatically generated. These reports can include interactive visuals or statistical monitoring tools, such as control charts. This SPC monitoring follows a set of constraints linked to the design space of weather acceptance and includes alarms if drifts occur. Via email, these alarms notify everyone concerned about the existence of drifts so that they can react accordingly. In this way, the entire SPC chain of data access, data preparation, reports, control chart updates, and live communication is fully automated within JMP Live.
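
      A minimal JSL sketch of the access-and-parse step follows; the endpoint URL and JSON field names are hypothetical placeholders, not the service actually used in the talk.

        // Fetch the forecast as JSON text
        req = New HTTP Request(
        	URL( "https://example.com/forecast?city=Manchester&days=7" ),
        	Method( "GET" )
        );
        body = req << Send;       // response body as text
        fc = Parse JSON( body );  // nested lists/associative arrays
        // Flatten the parsed structure into a data table for reporting
        dt = New Table( "Forecast",
        	New Column( "Day", Character ),
        	New Column( "Temp", Numeric )
        );
        For( i = 1, i <= N Items( fc["days"] ), i++,
        	dt << Add Rows( 1 );
        	dt:Day[N Rows( dt )] = fc["days"][i]["date"];
        	dt:Temp[N Rows( dt )] = fc["days"][i]["temp"];
        );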

      Room
      Whitworth 1
    • Machine Learning Methods on Industrial Data

      Machine learning has revolutionized the way we approach data analysis and decision making across various industries. In this presentation, we specifically focus on its utility when integrated with JMP Pro, a powerful statistical and data visualization tool. We highlight various real case studies (with anonymized and modified data, just to give context) to show how we can unravel complex relationships within data sets, leading to insightful clustering, segmentation, and predictions. Machine learning techniques in JMP and JMP Pro also empower users to identify and optimise the most relevant variables, enhancing model performance and interpretability. The main methods presented in this paper are:

      • Neural networks for prediction on several responses and desirability optimisation.
      • Regression trees, called Partition in JMP, to understand complex process issues.
      • Some hybrid models using both standard statistical methods and machine learning to blend the different methods.
      • Functional Data Explorer on sensor data from the high-tech industry for prediction.

      We also detail several JMP Pro features, including the Model Comparison platform.

  • 19:00-22:00

    • Off-Site Dinner

      18:15 - Gather for walk to dinner
      18:30 - Departure to dinner

      Room
      National Football Museum

7 Mar Thursday

  • 6:30-8:00

    • Breakfast

      Room
      Refuge Restaurant
  • 8:00-9:00

    • Information

      Room
      Ballroom Foyer
  • 8:00-8:45

    • Discovery Expo

      Room
      Ballroom
  • 9:00-9:45

    • Building a Global Community for Digital-First Approach to Innovation and Sustainability at Unilever

      Innovating more sustainable, higher-performing products is the foundation of our ambition for a Clean Future in Home Care at Unilever. Scaling up new technologies from laboratory to factory brings considerable and exciting challenges, so how do we approach innovation to deliver for our consumers and our planet?

      In Process Development, we believe the most value is created if DOE and modelling are established as key skills in every process engineer. This is why we have built a globally active community of practice through a 70:20:10 approach to digital upskilling, delivering impactful innovations through DOE and modelling on high-value projects. Embracing a "digital mindset" has empowered engineers to deliver impact and value as individuals, developing deep technical expertise in new-generation technologies through structured data capture and statistical modelling. This approach has enabled the introduction of sustainable biosurfactants and low-CO2 formulations straight-to-factory, with cost and complexity reductions across supply chains. New efficient process routes, optimised through modelling, have resulted in double-digit million-euro savings and product performance improvements throughout our Home Care portfolio.

      From formulation to factory, our approach to process development is helping to deliver the Clean Future revolution through a digital approach to innovation.

      Room
      Oak Room
    • Predicting Effluents from Glass Melting Process for Sustainable Zero-Waste

      Modelling manufacturing processes encounters challenges when measurements occur on different time scales. One such company produces glass containing SeO2; the compound's volatility at high melting temperatures causes significant evaporation, which is captured by a dust filter. Complicating matters, SeO2-laden dust is toxic, posing disposal challenges. Predicting the SeO2 content within the dust enables it to be recycled back into the manufacturing process, promoting sustainable, circular, zero-waste production and cost reduction.

      The glass manufacturing process operates continuously, with one-minute sensor readings for input variables. However, SeO2 content measurement in the dust occurs at longer intervals, necessitating sufficient dust accumulation for homogenization. In this presentation, we demonstrate the development of a predictive model for SeO2 dust concentration using a limited data set. To ensure that process parameters are averaged over the duration of dust sampling, we employ Monte Carlo simulations, utilizing the variability of process parameters.

      With JMP Pro, we outline data collection and preparation methods, as well as the subsequent use of simulated data to construct predictive models for SeO2 concentration. We explore the potential applicability of this methodology to industries facing similar challenges, such as chemicals or biotechnology, where modelling processes with disparate time scales and uncertainties is common.

    • Advances in Using JMP and JMP Pro for Analysis of High Spatial Resolution Mass Spectrometry Images

      At the ill-fated 2020 Summit in Munich, which demonstrated the incredible pivoting ability of JMP Summit organizers, we showed our initial applications of JMP to the analysis of high spatial resolution (NanoSIMS) mass spectrometry images.

      In this talk, we give an update on our advances since that presentation. After quickly reviewing our most basic analytical procedures that make use of dynamic linking and Graph Builder, we move on to more advanced analyses that feature cluster analyses, and finally, to applications of JMP Pro Functional Data Explorer, demonstrating how it helps us with interpreting mass spectra.

      And now we can do a lot of these analyses much more quickly, thanks to the Workflow Builder recently introduced into JMP 17. We draw upon example data sets from cancer research (cancer tumor tissue) to the greening of soil fertilizers using bacteria. For each of these studies, we set the stage with a brief background as to their importance, and then the majority of the talk consists of a live demonstration of the steps that we take to arrive at the end result.

      Room
      Whitworth 2
  • 9:00-12:00

    • JMP Lab

      Participate in testing features, provide critical user feedback, and experience new innovations first-hand. You will have the opportunity to directly influence the development of JMP, helping to enhance functionality, ease-of-use, and the overall user experience.

      Room
      Oak Lounge & Boardroom
  • 10:00-10:45

    • In-Process Optimization of a Drying Procedure Using Functional Data Explorer

      The process of drying a product in a fluid bed using non-conditioned air is complex and unpredictable, due in part to the variability of the incoming products. As a result, non-optimal settings were used too often, resulting in rework and a plethora of accompanying ills, including added costs, extra work, and increased pressure to meet production schedules. Attempts at modelling the drying time as a function of air and temperature conditions, with the aim of optimising the drying parameters, were only partially successful.

      However, a breakthrough came when Functional Data Explorer in JMP Pro was used to include the drying profile of the individual product within the first 15 minutes of the process in the model. Operators could then use the results to predict the minimum drying time and adjust machine settings accordingly. Using FDE to generate insight into how different parameters impact results in-process has been extremely valuable. This talk should be useful for anyone who measures process information by demonstrating how to deliver this information to operators so they can adjust machine settings midprocess to maximize outcomes.

    • What is the Real Impact of Your DOE Factors: Automating Impact Ratio Calculations with JSL

      Fujifilm Diosynth Biotechnologies is a contract development and manufacturing organisation (CDMO) that has a dedicated Process Characterisation department focused on performing process characterisation studies (PCS). The aim of PCS is to demonstrate that our customer processes are robust to changes in parameter settings or their normal operating ranges.

      PCS commonly employ design of experiments (DOE) to investigate the effects that process inputs have on quality attributes (QA) and process performance indicators (PPI). DOE analysis is a useful tool to identify the inputs that have an effect on the QA/PPIs, but it is mainly quantitative.

      In addition to the traditional DOE analysis, calculation of the impact ratio (IR) for each input provides a quantitative and qualitative assessment and can aid in the assignment of a parameter as being critical or not. The IR provides a measurement of the effect size relative to an acceptable range.

      Doing the calculations manually is time-consuming and prone to error. We will present an automation tool that can extract the required information from a DOE model and compute the IR. An interface allows the user to customise how the results are calculated.
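
      The abstract does not state the IR formula. Purely as an illustration (not necessarily the presenters' exact definition), a ratio of this general shape compares the predicted response change across a parameter's acceptable range with the width of the response's acceptance range:

        IR = \frac{\lvert \hat{y}(x_{\text{high}}) - \hat{y}(x_{\text{low}}) \rvert}{\text{USL} - \text{LSL}}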

    • How to Design and Analyse Experiments with Pass/Fail Responses

      It is not unusual for individuals unfamiliar with how to properly create and analyse pass/fail experiments to treat the data as if they were continuous. Incorrectly assuming binary responses can be handled in such a fashion can lead to disastrous results. Without an underlying model consistent with these types of data, it is easy to create a design with too few or too many runs, because it is unclear how to properly estimate the sample size needed to achieve a certain power. Analysing results as if they were continuous fails to consider the fundamental nature of the data and how it might affect model assumptions. Doing so might produce unrealistic results, such as probabilities below zero or above one.

      This session focuses on designing, evaluating, and analysing experiments for binary responses such as pass/fail. Using two of the most common functions for modelling binary data, the logit and probit, various functionality in JMP and JMP Pro is illustrated, including the use of simulation to estimate power and building prediction formulas. Analysis options for fitting these types of models will also be explored.
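
      For reference, both link functions keep the pass probability p within (0, 1) by modelling a transform of p, rather than p itself, as a linear function of the factors x:

        \text{logit:}\quad \log\frac{p}{1-p} = x^\top\beta \quad\Longleftrightarrow\quad p = \frac{1}{1 + e^{-x^\top\beta}}
        \text{probit:}\quad p = \Phi(x^\top\beta), \text{ where } \Phi \text{ is the standard normal CDF}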

      Room
      Whitworth 2
  • 11:00-11:30

    • Root Cause Searching on Yield Loss in the Automotive Industry by Multivariate Analysis and Modelling

      In the context of the semiconductor manufacturing industry for automotive, the yield is monitored at wafer level as a KPI in terms of cost, but also in terms of quality, as the link between quality and yield is proven.

      Here, a yield loss is observed at the electrical die-test step at hot temperature, in particular for a certain bin that corresponds to a group of tests of a specific component function. While unit probing (UP), where the yield loss is observed, highlights the failing dies, class probing (CP) tests the reticles, structures built between the dies that monitor the manufacturing steps before the dies are functional and can be tested. Searching for the root cause therefore means correlating the yield loss observed at UP with the CP tests to understand which manufacturing step is failing.

      Using JMP Pro, correlation analysis, multivariate analysis, and modelling are implemented on UP and CP data for the failing lots; the results provided some good clues so that the device engineers could design corrective actions.

    • Cause and Effect Diagram: The Hidden Champion for Visualizing Complex Structures

      Analytical programs need data in a strict row and column structure, and rows are usually treated as independent observations. The cause and effect diagram puts rows into a hierarchy, described by pairs of parent-child relationships. Originally intended to document brainstorming results in quality management, this platform is a powerful tool to visualize other structures as well.

      Generic data structures in JSL scripting are associative arrays and lists. Their elements can be values, as well as lists and associative arrays, providing a framework for quickly and efficiently managing complex data structures. Well-established in system communication, JSON interfaces are another example of hierarchical data structures. If items are treated in different ways and varying parameters are measured after treatment, the actual combinations can be documented in the same way.

      In this presentation, a JSL function is presented (and provided) that crawls through a list or an array and puts all the content into a data table. The cause and effect diagram displays the content in a graphical way. Application examples illustrate the versatility and power of this concept.
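
      The full function is provided with the talk; a much-reduced sketch of the same idea, recursively walking nested lists and associative arrays into Parent/Child rows, might look like this:

        crawl = Function( {node, path, dt}, {key, i},
        	If( Is Associative Array( node ),
        		For Each( {key}, node << Get Keys,
        			crawl( node[key], path || "/" || key, dt )
        		),
        		Is List( node ),
        		For( i = 1, i <= N Items( node ), i++,
        			crawl( node[i], path || "/" || Char( i ), dt )
        		),
        		// leaf: record one parent-child pair
        		dt << Add Rows( 1 );
        		dt:Parent[N Rows( dt )] = path;
        		dt:Child[N Rows( dt )] = Char( node )
        	)
        );
        dt = New Table( "Hierarchy",
        	New Column( "Parent", Character ),
        	New Column( "Child", Character )
        );
        // hypothetical nested structure, then display via the cause and effect platform
        crawl( ["batch" => {"temp", "pH"}, "assay" => ["day1" => "titer"]], "root", dt );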

      Room
      Ballroom Ped 2
    • Definitive Screening Design and Advanced Predictive Modelling as Useful Tools in Product Development

      In the pharmaceutical development of tablets, most active substances are difficult to process or dissolve. There are also many process steps and functional components that need to be included to solve all the issues that appear along the way. To narrow the focus, it is important to recognize which of the many potential factors are the most important for the responses of interest. 

      Definitive screening designs are often considered to be most appropriate for experimentation with four or more factors. Whenever there are available results of experiments that are not part of a specific design, it is good to use tools such as advanced predictive modelling techniques to help capture valuable information. The aim of this project was to apply different analytical techniques to evaluate the effects of input factors on responses. Another goal was to find the balance between the factors that contribute to tablet appearance and mechanical resistance and the factors that enable quick active substance dissolution, which is important for product in-vivo performance. By using a combination of analytical tools, valuable insights were obtained regarding the effects of formulation and process factors on tablet characteristics. Optimal settings were then defined to maximise dissolution.

    • Reduce Cost and Avoid Nonconformities Doing Smart Shelf Life Calculations in JMP

      Companies in the pharmaceutical industry must demonstrate shelf life by measuring product performance over time at storage temperatures. To accelerate the test, it is often also done at elevated temperatures. Although Arrhenius demonstrated in 1889 how to combine results at different temperatures into one model, many companies still analyze each temperature separately. It is not cost-efficient to stratify data into different models.

      In addition, ongoing verification of shelf life must be performed, where it is often enforced that all individual observations are inside specifications. Due to measurement noise, this criterion is often not met. Instead, it should be enforced that measurements are inside prediction intervals from the initial shelf life study, which is a weaker requirement.

      JMP has the Arrhenius Equation in the Degradation/Non-linear Path/Constant Rate platform. However, this platform lacks some of the excellent features of the Fit Model platform, such as the studentized residuals plot, Box-Cox transformation, random factors, and prediction intervals.

      This presentation demonstrates how the Arrhenius Equation can be entered into the Fit Least Squares Platform by making a Taylor expansion with only four terms, as well as how a JMP workflow can ease the calculations.
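
      To sketch one plausible reading of that expansion (with y the measured attribute, t time, T absolute temperature, and y0, A, Ea the Arrhenius parameters), the starting model is:

        k(T) = A \exp\left( -\frac{E_a}{RT} \right), \qquad y(t, T) = y_0 - k(T)\, t

      This is nonlinear in A and Ea, but truncating the Taylor series of the exponential after four terms yields a form that is linear in its unknown coefficients and therefore fits in Fit Least Squares:

        y(t, T) \approx y_0 + \beta_1 t + \beta_2 \frac{t}{T} + \beta_3 \frac{t}{T^2} + \beta_4 \frac{t}{T^3}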

      Room
      Ballroom Ped 4
    • Automation Approach Using JMP Scripting Language

      Time, effectiveness, and efficiency are very important in today's world. 

      Due to the continuously increasing amount of data, it is also important to analyse the data in a time-saving, efficient, and effective manner and to ensure data traceability. For this reason, JMP was chosen as the preferred statistical analysis software, and a JSL-based script was implemented to analyse the data during technical performance verification (TPV) studies in assay development. TPV studies are submission-relevant performance studies used to claim product performance for several performance indicators. Up to 30 TPV studies are required during product development to prove product performance.

      A JMP data management and analysis script containing several built-in scripts was written and validated in JMP to visualise the data, perform calculations with the corresponding JMP platforms, and generate outputs for submission-relevant documentation.

      The presentation shows how the script automates processes, thus helping the technicians with assay development during their daily work. After listing the advantages and disadvantages of this approach, it demonstrates how we attempt to compensate for the disadvantages, using a video clip to show how the script works.

    • Accelerated At-line Amino Acid Analysis by Using JMP Add-in Feature from 908 Devices

      908 Devices released a JMP add-in tool that facilitates the direct analysis and trending of amino acid and vitamin concentrations generated at-line by the REBEL media analyzer. In media development and adjustment, various parameters are tested over time, which leads to a high number of samples and generated data. The REBEL analyser, in combination with the JMP add-in, allows data sets to be visualized immediately in a customized view. Alvotech presents a case study demonstrating how JMP enables a fast comparison of amino acid levels in different bioreactor runs with different media formulations, leading to improved process understanding.

      In a complex experimental setup, various media formulations were tested over 13 days of multiple bioreactor runs by analysing amino acid and vitamin concentrations at-line with REBEL in three different dilutions in duplicate to evaluate batch performance.

      The results of this large data set of all 21 amino acids and six vitamin levels were visualized with JMP in a simple way that still provided various setups for comparing data and determining measurement accuracy.

      Room
      Ballroom Ped 6
    • Use of JMP for Restructuring and Analysis of DOE Data for Pigment Stability Optimisation

      A small DOE was prepared by our formulation development project team. The project required the examination of a large number of different, measured responses to investigate the optimisation of pigment stability in a new, experimental paint formulation. However, the team had difficulty obtaining clear guidance from their analysis of the results. Through the use of JMP’s data restructuring tools, it was possible to reformat the existing data into something that could be easily analysed using SLS and logistic modelling to give the key impacts on stability, as well as the probability that a given combination of factors would yield a satisfactory result.

      Room
      Ballroom Ped 7
    • Consolidation and Integration of Data Workflows to Guarantee Manufacturing Process Robustness

      We needed to develop an internal tool to standardize analyses, ranging from database interrogation to final assessment of process capability and investigation of special causes.

      The development started from a database with a simple structure (statistical lab checks) and gradually extended to bigger databases populated by online process measurements. 

      The goal was to have a simple and quick internal tool so that we could:

      • Create a standardized data set, which involved the customization of the data download by filtering and selecting the most significant parameters from a graphic user interface.
      • Automate the analysis, which would allow us to standardize process capability analysis and develop effective graphs for quick detection of anomalies, through proper usage of control charts and main KPI trends.
      • Deeply investigate the process and link the results of the automatic analysis, with a focus on potential correlations between product characteristics and process parameters, which would allow us to identify the significant variables and find room for quality improvement.
  • 11:30-12:00

    • A Mixture/Process Experimental Design and SVEM Analysis for an Esterification Reaction

      An experimental design was created to study the formation of an unwanted byproduct in an esterification reaction. Four mixture component factors plus one process-based factor were used to generate a 26-run space-filling experimental matrix, specifically for analysis using Self-Validated Ensemble Modelling (SVEM). This approach was selected over a traditional mixture design intended for a polynomial Scheffé model. The resulting predictive model was an excellent fit to the data, clearly identifying the impact of each factor on the level of byproduct formed. This information was used to accelerate the development of a kinetic model and scale up the process.

      Room
      Ballroom Ped 2
    • Using JMP to Build Soft Sensors for Efficient Monitoring of Chemical Production Processes

      Production processes are routinely sampled to determine the concentration of key components to meet safety, environmental, or quality criteria. The deployment of temperature-compensated density meters provides an opportunity for live process monitoring to replace offline sampling. Historic process data is not suitable for model building as offline analysis occurs sporadically, with uncertainty about the exact time of sampling and with limited variability in results. 


      A naïve approach of varying the temperature and concentration before measuring the density (in the laboratory) leads to inflated errors (since concentration is the desired prediction). The method has merits since it only involves solvent addition and temperature control, which can be automated. We show how this model can be used as a first pass, to target evenly spaced temperatures and densities, followed by sampling to determine the concentration, to produce a model with much lower prediction uncertainties.
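
      A minimal JSL sketch of the first-pass model (hypothetical columns Density, Temperature, and Concentration):

        // Fit density as a function of temperature and concentration from lab data
        fm = Fit Model(
        	Y( :Density ),
        	Effects( :Temperature, :Concentration, :Temperature * :Concentration ),
        	Personality( "Standard Least Squares" ),
        	Run
        );
        fm << Prediction Formula;  // Save Columns > Prediction Formula
        // Inverting the saved formula for Concentration at a measured Temperature
        // and Density gives the soft-sensor estimate.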


      Exporting the model to Seeq and PowerBI enabled continuous monitoring and decreased costs from daily sampling by €15,000 per year (for a single process). The implementation removed delays waiting on the offline analysis, reduced the risk of operator exposure to process chemicals, and enabled the production team to predict and plan interventions, thus increasing operational time.

      Room
      Ballroom Ped 3
    • Utilising HTPD and DOE to Optimise the Pharmaron Gene Therapy Platform

      Pharmaron has developed a platform process to generate adeno-associated viruses (AAV) gene therapies with a highly adaptive toolbox to manage varying AAV products and serotypes. Our toolbox can rapidly assess a product's compatibility with our platform through a manufacturing feasibility assessment and finely tune a number of parameters for targeted process optimisation.

      One essential tool for Pharmaron’s approach to optimisation is DOE (design of experiments). We show how a central composite DOE approach can maximise the recovery of monomeric AAV by identifying the optimal residence time, loading density, and load pH for the initial AAV purification process's capture step. The optimal loading conditions were measured using titre by Capsid ELISA and multi-angle dynamic light scattering (MADLS) and monomer percentage by DLS. DOE analysis showed a strong link between loading density and monomer content, whereby a higher loading density resulted in a higher yield of monomeric virus. Load pH and residence time had negligible effects on recovery and monomericity.

      Since it facilitates the analysis of multiple parameters in a fraction of the time, DOE has enabled Pharmaron to rapidly identify the optimal conditions for affinity capture. It significantly improves process performance and drives generation of a highly pure, monomeric virus.

      Room
      Ballroom Ped 4
    • Simulation Study of Process History Impact on Intensified Design of Experiment Regression Models

      My research aims to enhance the efficiency of early-stage process development with mammalian cells in the biopharmaceutical industry by applying an intensified design of experiment (iDoE) approach. Unlike classical design of experiment, iDoE involves intra-experimental variations of critical process parameters (CPPs). This approach not only increases data-generation efficiency but also enables the consideration of temporal process dynamics through stage-wise optimization. However, a potential limitation is that previous CPP settings may (irreversibly) impact the cells and affect their response behavior to subsequent CPP set points.

      To address this issue, my research focuses on developing guidelines for planning and evaluating iDoEs robustly, considering the impact of process history. The focus of the presented simulation study is to investigate the impact that different effect sizes of interaction terms associated with the process history have on our regression models. Subsequently, the beta estimates and variance components of these models are compared to evaluate the impact of not explicitly considering the process history. This research has the potential to significantly impact the biopharmaceutical industry by innovating the way process optimization in early-stage development is performed, considering the dynamic nature of these processes.

    • Tonisity: Our Journey from Excel to JMP Automations

      We started using JMP four years ago. We were tired of Excel’s bulkiness, and we loved JMP’s drag-and-drop features and how easy it is to learn. However, we have come a long way since then, currently using JMP for statistics and, more importantly, for automations and product QC.

      This meta-analysis is just one example of the time and effort we have saved by using JMP. We receive experimental data from client farms on the impacts of our products on piglet survival/mortality. This meta-analysis started off five years ago as a three-to-five-week, Excel-intensive process. Over time, it morphed into a mainly automated JMP process that took a week or less. This time saving means we have more time for “value-added” process steps, such as presenting the data to our clients and writing abstracts for conferences. It also means that we can spend more time querying the data for information nuggets, which allows us to come up with “something new” every time we present. JMP really taught us the value of following process maps and the impact the automations can have on how we view data.

      Room
      Ballroom Ped 6
    • Mixture Design in Aerospace

      Aerospace-grade formulations are often composed of several ingredients whose ratios and interactions will impact one or more properties of the final component. Theory and experience can help with the design of these formulations, but sometimes there are interactions or synergies that have not been discovered yet. Therefore, it can be useful to explore a wide experimental space to discover the unexpected.

      In this presentation, I share the results and insights obtained after running a mixture design, including how to visualize, normalize, and analyse the data. I also discuss ternary plots, how to communicate technical information to a nontechnical audience, the challenges encountered, and what could have been done better.

      Room
      Ballroom Ped 7
    • QbD in Clinical Trials: Industry Status, Challenges, ICH Guidelines, and Statistical Considerations

      Recent updates to Good Clinical Practices (GCP) Guidelines (ICH E6[R2]) have promoted the concept of applying Quality by Design principles to the design, analysis, and monitoring of clinical trials in the pharmaceutical industry.  The three key aspects of this approach are define, monitor, and report. 

      JMP products have been used in the pharma industry for many years to help apply QbD to nonclinical development and manufacturing; JMP and JMP Clinical are well-suited to do the same type of analysis for clinical trials. 

      In this talk, we describe the GCP QbD framework and show an example of how JMP and JMP Clinical can be used to monitor clinical trials using this framework.

      Room
      Ballroom Ped 8
  • 12:00-12:30

    • Exploiting JMP Pro to Model Outlier Distributions in Semiconductor Process Development

      Outliers are often removed when modelling. However, detecting outliers is the essence of statistical process control. Understanding how they are created is an opportunity for quality improvement.  

      Outliers are defined as "unusual" observations and require a convention, such as the three-sigma rule or another more subjective criterion. In this presentation, we demonstrate three approaches to modelling DOE data in semiconductor process development, with the goal of understanding the mechanism that generates outliers:

      • The first approach takes experimental data, uses a rule to categorize observations as outliers, and then uses logistic/Poisson regression to model the rate at which they are generated (a minimal sketch appears below).
      • The second uses Functional Data Explorer in JMP Pro to model the inverse empirical cumulative distribution function so one can see which combinations of factors cause or prevent outlier generation in a semiconductor manufacturing process.
      • The third approach uses the nonlinear platform to model the data with a t-distribution so one can see the outlier distribution, as well as detect shifts in the process mean.  

      We discuss how the three approaches differ in terms of the quality of the information they supply and the difficulty of the analyses.
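
      A minimal JSL sketch of the first approach, assuming a hypothetical response y, factors X1 and X2, and the three-sigma rule as the outlier convention:

        dt = Current Data Table();
        // Flag observations beyond three sigma of the overall mean
        dt << New Column( "Outlier", Character, Nominal, Formula(
        	If( Abs( :y - Col Mean( :y ) ) > 3 * Col Std Dev( :y ), "outlier", "typical" )
        ) );
        // Model the outlier rate as a function of the DOE factors
        Fit Model(
        	Y( :Outlier ),
        	Effects( :X1, :X2 ),
        	Personality( "Nominal Logistic" ),
        	Run
        );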

    • Integrating JMP Data Exploration and Python Machine Learning Capabilities

      The quality of biopharmaceuticals is influenced by many factors. Understanding these factors is a prerequisite for delivering the highest quality products to the patients and for complying with regulatory requirements. JMP is straightforward to use for data exploration. However, correlations within or between larger data sets are frequently complex, requiring tools not available in the standard JMP environment.

      On the other hand, Python's machine learning capabilities offer solutions for almost every data science question. Unfortunately, using Python program code is often difficult for many subject matter experts. To address this tension, we show how to incorporate Python for machine learning questions as an alternative to extended JMP Pro features.

      We use JMP journals to guide the user through a machine learning workflow. Data preparation steps, such as interpolation and other measures to clean up data, are performed with JMP. The application of machine learning algorithms and the validation are performed with Python using Scikit-learn pipelines. Finally, the visualization of the results is again done with JMP to allow the user a simple adaptation of the plots.

      We show how this tool can be used to identify the factors with the largest impact on a certain output parameter.

      Room
      Ballroom Ped 2
    • GRR Analysis of Effects from Measurement Queue Time on SiO2 Thin Film Thickness

      SiO2 thin film has been widely used as STI liner, gate oxide, spacer, etc., in the semiconductor industry. The thickness of SiO2 layers is strictly controlled and is affected by facilities, chambers, and measurements. Among these factors, thickness is directly susceptible to measurements. If measurement queue time is too long, the true thickness of the SiO2 layer formed from the thermal process may be distorted, as thickness may increase naturally in the atmosphere.

      To analyse the effects of queue time and measurements on SiO2 thickness, JMP GRR analysis was introduced. After defining the operation, a cause-and-effect diagram is used to summarize possible factors for thickness shifts. Next, thickness data from coupons is collected, based on the JMP MSA design platform. The thickness of each coupon is measured multiple times as repeatability and degradation tests, with the same repeatability tests conducted every three hours as reproducibility tests. Once the variability in thickness from repeatability and reproducibility is analysed using Xbar and S charts, GRR analysis is performed to evaluate current GRR performance. Finally, relationships between P/T ratios, alpha/beta risks, and spec tolerance are examined, and regression models between thickness and queue time are built to determine whether the measured thickness can be trusted.
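
      A minimal JSL sketch of the repeatability/reproducibility step (hypothetical columns Thickness, Coupon, and Hour for the three-hour repeats):

        // Variability chart across coupons and measurement hours
        Variability Chart(
        	Y( :Thickness ),
        	X( :Hour, :Coupon ),
        	Std Dev Chart( 1 )  // S chart alongside the means chart
        );
        // Gauge R&R statistics are then available from the platform's
        // red-triangle Gauge Studies menu once spec tolerance is supplied.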

      Room
      Ballroom Ped 3
    • Dynamic JMP Dashboard for Optimising Tool Maintenance in Semiconductor Processes

      In the semiconductor industry, because of ongoing customer demand for lower-cost devices, tool log data analysis is important for efficient tool usage. Deploying tech-enabled services with JMP (SAS Institute) visualization tools allows us to become more efficient in responding to maintenance events. Analyzing the process runs using JMP distribution, histogram, and box plot options helps to focus on the problem areas and reduce the maintenance duration. The Wilcoxon non-parametric test is applied to perform a hypothesis study on the tool down duration, to check variation with respect to target, and to determine the confidence interval for maintenance events. The JMP Pareto plot and Ishikawa cause-and-effect diagram are implemented for root cause analysis and action plans. A dynamic JMP dashboard displaying box plots, along with the above performance tests, facilitated better planning of maintenance activities and assignment of priority. The dependency of PM success and failure on PM type was reported via quick visualization from the JMP dashboard.

    • Reviewing the Process Parameters Data to Identify the Correct Hardware Design

      By using an atomic layer coating on active pharmaceutical ingredient (API) particles, its surface can be tailored and can enhance the performance of API particles. The hardware team at Applied has designed an in-house tool to do coating on these high surface area particles.

      As experimentalists, our main role in the project is to optimise the recipe and, based on the data obtained, provide feedback to the hardware team for a better design. In this presentation, we discuss two sets of process data obtained from different designs of one of the hardware parts. The objective is to compare the two data sets and find the design with minimum variation. For this, we use JMP tools such as histograms and continuous fits to see the distribution of the process parameters and the distribution type for both cases. A normal quantile plot is used to check for any variation in the normality of the data sets. Outlier analysis is done using Explore Outliers in JMP, and one-way analysis allows data comparison. With the data obtained, the best option is chosen and proposed for the upgrade. Exploring the outliers helps to determine the variation in the data set.

    • Mastering the Art of Scripting Seamless JMP-Like User Interfaces with Structured JSL

      I create a lot of user interfaces using JSL, some simple and some much more complicated, but most follow similar principles: the scripts use similar templates, they have a similar layout in the launcher, reports look similar, etc.

      In this presentation, I show how I use JSL to script JMP-like user interfaces, which types of templates I tend to use, and the decision process I follow while developing such platforms. I also demonstrate different display boxes and share some tips and best practices for JMP scripting.
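
      As a taste of the templates discussed, here is a pared-down launcher skeleton in JSL (all names and layout choices are illustrative only; it assumes a data table is open):

        nw = New Window( "Example Launcher",
        	Border Box( Left( 3 ), Top( 2 ),
        		V List Box(
        			Text Box( "Select a column, then click OK." ),
        			H List Box(
        				Panel Box( "Select Columns",
        					colList = Col List Box( All, Width( 200 ), N Lines( 8 ) )
        				),
        				Panel Box( "Action",
        					Lineup Box( N Col( 1 ),
        						Button Box( "OK",
        							Print( colList << Get Selected );  // launch logic goes here
        							nw << Close Window;
        						),
        						Button Box( "Cancel", nw << Close Window )
        					)
        				)
        			)
        		)
        	)
        );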

      Room
      Ballroom Ped 7
  • 12:30-13:45

    • Affinity Group for Women in Science and Engineering Luncheon

      Join us for this gathering as we prepare to celebrate International Women’s Day, where we’ll welcome special guests Lucy Cooke and Julia O’Neill. Limited Seating.  RSVP here.

    • Lunch

      Room
      Refuge Restaurant
  • 14:00-14:45

    • JMP Lab

      Participate in testing features, provide critical user feedback, and experience new innovations first-hand. You will have the opportunity to directly influence the development of JMP, helping to enhance functionality, ease-of-use, and the overall user experience.

      Room
      Oak Lounge & Boardroom
    • Tolerating Input Variation: Analyses and Predictions in the Most Variable Systems for Hobby Athletes

      A recurring challenge exists across the process industries, namely, how to integrate the data from ever-more complex sensors with the domain knowledge of what works in that process, borne out of vast experience.

      Interestingly, a similar problem faces endurance sports, where a dizzying array of data can now be easily collected. Unfortunately, this has led to a situation in which the athlete is presented with masses of data but not necessarily the tools to understand what it is telling them about their performance.

      In this talk, we explore how:

      • JMP can simplify the preparation and analysis of training data, providing a route to testing long-held beliefs about what works.
      • JMP can streamline the exploration of time series curve data, allowing us to see the whole picture.
      • DOE can provide a convenient approach to testing a range of possible improvements.

      Armed with this knowledge, we can now answer the question, “How best to operate this complex system?”


      Room
      Oak Room
    • Dynamic DOE: A Novel Methodology for Time-Dependent Design of Experiments in Chemical Development

      In this presentation, we introduce a novel design of experiments (DOE) methodology developed in-house. Named "Dynamic DOE," it is specifically tailored for time-dependent DOEs in chemical development using kinetic reaction data. The development of this innovative approach addresses the challenges faced in traditional DOE methods when dealing with time-sensitive chemical processes.

      We present benchmark data comparing different DOE designs and their performance in combination with various regression techniques. This comprehensive analysis demonstrates the advantages of the Dynamic DOE methodology in terms of accuracy, efficiency, and adaptability.

      Furthermore, we showcase real-life application examples from late-stage chemical development at Boehringer Ingelheim. These case studies illustrate the successful implementation of the Dynamic DOE technique in combination with high-throughput automated lab reactors, highlighting its practical benefits and potential for widespread adoption in the industry.

      Join us to learn more about chemical development advancements through the Dynamic DOE methodology, an innovative technique that seeks to change the way we utilize time-dependent experiments in the field.

    • Utilising DOE and the Prediction Profiler for Creating Sustainable Formulations

      Design of experiments (DOE) has always had an intrinsic contribution toward sustainability. Simply by minimising the number of experiments to reach the target desired, significant savings in resources can be obtained. 

      However, it is not only about using DOE, but also combining it with the Prediction and Contour Profilers. These profilers enable scientists and engineers to reach optimal products and processes, generating some “secondary” contribution to sustainability. For example, by making a process more efficient, it's possible to generate less waste and/or spend less energy. Achieving a better or more efficient product could save resources in the application of that product or enhance its lifetime by again generating less waste.

      In this paper, we show how at Johnson Matthey, a global leader in sustainable technologies, we consider sustainability not only in relation to the application of our products but from the very beginning, at their formulation. We explain how we use DOE and the profilers to formulate with performance in mind while also trying to minimise the footprint of the formulation itself, thereby including sustainability in multiple aspects of the product.

  • 15:00-16:15 Plenary

    • Diversity is the Key to Drive Evolution Forward, Revealing Unexpected Truths

      Lucy Cooke is a National Geographic explorer, New York Times best-selling author, and award-winning broadcaster with a master's in zoology from New College, Oxford, where she studied under Richard Dawkins. Cooke has written and presented natural history documentaries for BBC, National Geographic, ITV, and Discovery. She’s also a regular on Radio 4 panel shows like “Sue Perkins Nature Table” and “Infinite Monkey Cage,” as well as hosting her own “Political Animals” and “Power of…” series. Her first book, “The Unexpected Truth About Animals,” was short-listed for the prestigious Royal Society prize and has been translated into 18 languages. Her latest bestseller, “Bitch: What it means to be female,” was cited as one of the best books of 2022 by the Guardian and the Telegraph.

      Room
      Ballroom
  • 16:15-17:30

    • Discovery Expo & Meet the Author

      Room
      Ballroom
  • 18:30-21:30

    • Dinner Mingle

      Room
      The Vault
Times subject to change