JMP Discovery Summit Series: Abstracts
Showing events with label: Consumer and Market Research.
JMP 15: Instant Graphical Gratification - John Sall
Monday, March 9, 2020
JMP 15 and JMP Pro 15 give you more options for exploring and analyzing your data interactively. From enhanced data preparation and modeling to instant graphical gratification, you'll discover new ways to access and understand your data. You'll also find a new way to publish, share and communicate those findings. As part of this presentation, John Sall shows how to use JMP to explore, analyze and understand patterns related to the current coronavirus worldwide outbreak. Download the COVID-19 data from Johns Hopkins University on GitHub to use with this journal.
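A minimal JSL sketch of that first step, assuming the Johns Hopkins CSSE repository layout as of early 2020; the URL and the column handling are assumptions, not part of the original journal:

// Open the Johns Hopkins CSSE confirmed-cases time series directly from GitHub (URL is an assumption)
url = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv";
dt = Open( url );  // JMP imports the CSV into a new data table

// Stack the one-column-per-date layout into rows for trend plotting in Graph Builder
// (in a real analysis, exclude the numeric Lat/Long columns from the stack first)
stacked = dt << Stack(
	Columns( dt << Get Column Names( Numeric ) ),
	Source Label Column( "Date" ),
	Stacked Data Column( "Confirmed" )
);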
Labels (7):
Basic Data Analysis and Modeling
Consumer and Market Research
Content Organization
Data Access
Data Blending and Cleanup
Quality and Process Engineering
Sharing and Communicating Results
Production Line Control in Semiconductor High-Volume Manufacturing (2020-EU-45MP-472)
Monday, March 9, 2020
Guillaume Bugnon, Data Engineer, Soitec; Stéphane Astie, Yield Manager, Soitec. Semiconductor companies are continuously challenged on manufacturing cost and quality. An effective yield management system is key to detecting line drift and minimizing scrap, reducing wafer cost and quality impacts. JMP has helped Soitec improve yield since 2007 by providing an easy-to-use tool to visualize equipment data versus product yield data. Methods for performing a set of statistical tests adapted to the nature of the response distribution (normal, non-normal, binary) were developed in 2010 for the yield team to find the root cause of an issue (the dedicated Yield Guard JMP/JSL application). In a multi-product, full-fab environment, data quality and real-time analysis are critical for accurate drift root cause determination. Agile collaboration between the IT and yield teams allowed us to develop an automatic tool that reaches those goals and controls our production line. We are proud to present our methodology from data collection to data analysis. The presentation will be given interactively with JMP and JSL code.
Labels (12):
Advanced Statistical Modeling
Automation and Scripting
Basic Data Analysis and Modeling
Consumer and Market Research
Data Blending and Cleanup
Data Exploration and Visualization
Design of Experiments
Mass Customization
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Sharing and Communicating Results
Introduction to JMP(R) Live: Use and Administration (2020-EU-TUT-529)
Monday, March 9, 2020
Brian Corcoran, JMP Director of Research and Development, SAS; Dieter Pisot, JMP Principal Application Developer, SAS; Eric Hill, JMP Distinguished Software Developer, SAS. You know the value of sharing insights as they emerge. JMP Live — the newest member of the JMP product family — reconceptualizes sharing by taking the robust statistics and visualizations in JMP and extending them to the web, privately and securely. If you'd like a more iterative, dynamic and inclusive path to showing your data and making discoveries, join us. We'll answer the following questions: What is JMP Live? How do I use it? How do I manage it? For background information on the product, see this video from Discovery Summit Tucson 2019 and the JMP Live product page.
JMP Live Overview (for Users and Managers) – Eric Hill: What is JMP Live? Why use JMP Live? Interactive publish and replace; what happens behind the scenes when you publish; groups from a user perspective; scripted publishing; stored credentials; API keys; replacing reports.
Setup and Maintenance (for JMP Live Administrators) – Dieter Pisot: Administering users and groups; limiting publishing; setting up JMP Live; Windows services; .env files; upgrading; applying a new license; using Keycloak single sign-on.
Installing and Setting Up the Server (for IT Administrators) – Brian Corcoran: Choosing architectural configurations based on expected usage; understanding SSL certificates and their importance; installing the JMP Live database component; installing the JMP Pro and JMP Live components on a separate server; connecting JMP Live to the database; testing the installed configuration to make sure it is working properly.
Labels (11):
Automation and Scripting
Basic Data Analysis and Modeling
Consumer and Market Research
Content Organization
Data Access
Data Blending and Cleanup
Data Exploration and Visualization
Mass Customization
Quality and Process Engineering
Reliability Analysis
Sharing and Communicating Results
A Study of Common Extracurricular Activities Related to Undergraduate Seminars in Japan (2020-EU-EPO-493)
Monday, March 9, 2020
Wakako Fushikida, Associate Professor, Tokyo Metropolitan University. I conducted a questionnaire survey of 130 faculty members to reveal the extracurricular activities related to undergraduate seminars affectionately called "Zemi". For the question "What kind of extracurricular activities did you offer students outside the seminar?", the faculty members were asked to rate 11 items on a 5-point scale. First, after checking the items for extreme bias using histograms and frequency distribution tables from the "univariate distribution" platform, the following three items were selected as common extracurricular activities: "Drinking party" (average = 3.39, S.D. = 1.20), "Camp" (average = 2.68, S.D. = 1.64), and "Sub-zemi" (average = 2.54, S.D. = 1.45). Second, the relationships between the three items' scores were visualized by faculty members' research fields using a "three-dimensional scatter plot". The figure shows that the frequency of "Camp" and "Sub-zemi" in the social sciences is higher than in the other fields. Finally, the results of a regression analysis indicate that undergraduate seminars in the social sciences and faculty members' recognition of the importance of extracurricular activities affect the frequency of the common extracurricular activities in undergraduate seminars (F(6, 105) = 3.82, p = 0.00).
Labels (9):
Advanced Statistical Modeling
Basic Data Analysis and Modeling
Consumer and Market Research
Data Exploration and Visualization
Design of Experiments
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Sharing and Communicating Results
What Makes a Car Detailing Job Great? Adaptive Multi-Stage Customer DOE (2020-EU-EPO-486)
Monday, March 9, 2020
Zhiwu Liang, Principal Scientist, Procter & Gamble; Pablo Moreno Pelaez, Group Scientist, Procter & Gamble. Car detailing is a tough job. Transforming a car from a muddy, rusty, full-of-pet-fur box on wheels into a like-new clean and shiny ride takes a lot of time, specialized products and a skilled detailer. But what does the customer really appreciate in such a detailed car cleaning and restoring job? Are shiny rims most important for satisfaction? Interior smell? A shiny waxed hood? It is critical for a car detailing business to know the answers to these questions to optimize the time spent per car, the products used, and the level of detailing needed at each point of the process. With the objective of maximizing customer satisfaction and optimizing the resources used, we designed a multi-stage customer design of experiments. We identified the key vectors of satisfaction (or failure), defined the levels for those and approached the actual customer testing in adaptive phases, augmenting the design in each of them. This poster will take you through the thinking, designs, iterations and results of this project. What makes customers come back to their car detailer? Come see the poster and find out!

Speaker transcript:

Zhiwu Liang: Hello, everyone. I'm Zhiwu Liang, statistician at the Brussels Innovation Center of the Procter & Gamble company, working for the R&D department.

Pablo Moreno Pelaez: And I'm Pablo Moreno Pelaez, working right now in Singapore in the R&D department for Procter & Gamble. We wanted to introduce this poster to share a case study in which we tried to figure out what makes a car detailing job great. As you know, Procter & Gamble is the very famous company for car detailing jobs (no, just a joke). We had to anonymize what we had done, and this is the way we wanted to share this case study: putting it in the context of a car detailing job. What we wanted to figure out here is what the key customer satisfaction factors were, for which we then built an adaptive design that we tested with some of those customers, in order to build the model and optimize the detailing job for the car: how do we minimize the use of some of our ingredients, and how do we minimize the time we take for some of the tasks involved in the detailing job? The first thing we took a look at was the different vectors a customer looks at when they take the car to get detailed, cleaned and shined and go back home with basically a brand-new car: clean attributes, shine attributes and the freshness of the car. In more detail, we looked at the exterior cleaning, the cleaning of the rims, the cleaning of the interior, the shine of the overall body, the rims and the windows, and of course the overall freshness of the interior. We wanted to modify these attributes in different ways, combining the different finishes that a potential car detailing job would give, to estimate and build a model of the overall satisfaction, the satisfaction with cleaning and the satisfaction with shine as we modify those different vectors. This will allow us in the future to use the model to estimate, for example, whether we can reduce the time we spend on the rims because it is not important, or reduce the time we spend on the interior, or reduce the amount of products we use for freshness, if those are not important; really, to optimize how we spend the resources on delivering the car detailing jobs. On the next slide you can see the phases of the study.

Zhiwu Liang: Yes. As Pablo said, as the car detailing company we are very focused on consumer satisfaction, so for this particular job what we had to do was identify the key factors which drive overall consumer satisfaction as well as clean and shine satisfaction. To do that, we separated our study design and data collection experiments into three steps. First we ran the pilot, which was designed as five different scenarios using five cars, to set up the different levels of each of the factors. We set all five factors Pablo described previously to two levels, one low and one high. Then we recruited 20 consumers to evaluate all five cars, each in a different order. The main objectives of this pilot were to check the methodology, to check that consumers understood the questions we asked and provided the correct answers, and to define the proper range of each factor. After that, we went to phase one, where we extended the design space to seven factors. Some factors kept the low and high levels as in the pilot; some were extended to low, medium and high, because we thought including more levels would be more relevant to the consumer. Since we had more factors, the custom design generated more experimental runs; in total we had 90 runs of car settings, and we asked each panelist to evaluate only five of them, in different orders and different combinations. Because each consumer needed to evaluate five out of the 90 settings, we used a balanced incomplete block design technique with 120 consumers, each of them evaluating five cars. With the data collected from these 120 consumers we ran the model and identified the main effects and the interactions. Through that we removed the non-important factors and went to phase two, using the finally identified six factors and, of course, adding more levels for some factors, because we saw that low was not low enough in the phase one study and the middle did not really match consumer satisfaction; so we added some factor levels for the middle and high into the current design space. The phase two design of experiments was augmented from phase one, so we got different settings for 90 different cars and again asked 120 consumers to evaluate five cars each in a balanced incomplete block. Through that we could identify the best factor settings giving the optimal solution for consumer satisfaction and for clean and shine satisfaction. As you can see here, we ran the model using our six factor settings, each of which plays some role in the overall consumer satisfaction and in the cleaning and shine satisfaction. For the overall satisfaction, clearly the cleaning of the rims, the shine of the windows and the cleaning of the interior are the key drivers: if consumers see the rims clean and the windows shiny, normally they are satisfied with our car detailing job. We also identified a significant interaction: exterior clean and interior clean combined contribute differently to the overall satisfaction than to the clean satisfaction and the shine satisfaction models. We identified very significant impact factors for clean: all of the clean factors relate to the clean satisfaction, and all of the shine factors relate to the shine satisfaction, but from different perspectives, with clean focused on the rims and shine focused on the windows. From the validation we can set better levels for all the car-related factors, which helps us to define new products that achieve the best consumer satisfaction based on all of the factor settings. I think...
Labels (12):
Advanced Statistical Modeling
Basic Data Analysis and Modeling
Consumer and Market Research
Content Organization
Data Blending and Cleanup
Data Exploration and Visualization
Design of Experiments
Mass Customization
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Sharing and Communicating Results
Industrial Biotech Case Study: Customized DOE for Process Optimization in Small-Scale Bioreactors (2020-EU-EPO-473)
Monday, March 9, 2020
Andreas Trautmann, R&D Scientist, Lonza AG; Claire Baril, Scientist USP Development, Lonza AG. Design of experiments (DOE) is a frequently used and time-saving tool in industrial biotechnology for optimizing microbial cultivation process parameters. Developing standard approaches for such experiments facilitates transferability to customers as well as comparability between similar projects. In this case study, we show a general approach for the optimization of process parameters in small-scale bioreactors, including a subsequent DOE model validation at pre-pilot scale. The Custom Design tool in JMP was applied to generate an I-optimal response surface design with three center points. In total, four continuous factors were chosen based on empirical knowledge and preliminary data from disposition runs. One major goal of DOE approaches in industrial biotechnology is to increase the yield, e.g., concentration or titer, of the final product. By applying Custom Design in JMP we were able to increase product titer significantly. The generated model was able to accurately predict the output variable within the characterized range, even though three of the twenty-four experimental runs were not successful. In addition, the JMP data exploration tools enabled a fast evaluation of data quality as well as of time-dependent factor correlations, which contributed to enhancing the DOE model.
Labels (9):
Advanced Statistical Modeling
Automation and Scripting
Basic Data Analysis and Modeling
Consumer and Market Research
Data Exploration and Visualization
Design of Experiments
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Production Line Control in Semiconductor High-Volume Manufacturing (2020-EU-45MP-472)
Monday, March 9, 2020
Guillaume Bugnon, Data Management Engineer, Soitec; Baptiste Follet, Yield Engineer, Soitec; Stéphane Astie, Yield Manager, Soitec. Semiconductor companies are continuously challenged on manufacturing cost and quality. An effective yield management system is key to detecting line drift and minimizing scrap, reducing wafer cost and quality impacts. JMP has helped Soitec improve yield since 2007 by providing an easy-to-use tool to visualize equipment data versus product yield data. Methods for performing a set of statistical tests adapted to the nature of the response distribution (normal, non-normal, binary) were developed in 2010 for the yield team to find the root cause of an issue (the dedicated Yield Guard JMP/JSL application). In a multi-product, full-fab environment, data quality and real-time analysis are critical for accurate drift root cause determination. Agile collaboration between the IT and yield teams allowed us to develop an automatic tool that reaches those goals and controls our production line. We are proud to present our methodology from data collection to data analysis. The presentation will be given interactively with JMP and JSL code.
Labels (12):
Advanced Statistical Modeling
Automation and Scripting
Basic Data Analysis and Modeling
Consumer and Market Research
Data Blending and Cleanup
Data Exploration and Visualization
Design of Experiments
Mass Customization
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Sharing and Communicating Results
Biological Products and Stability: Linear and Nonlinear Modeling (2020-EU-30MP-471)
Monday, March 9, 2020
Thomas Brisset, Stability Platform Manager, Stallergenes Greer. Stability studies are a key part of pharmaceutical product development. They help justify shelf life and storage conditions. By using a stability data modeling approach, the laboratory can characterize its product and perform shelf-life extrapolation. This approach can also help in the definition of acceptance criteria for quantitative parameters. In the context of biological product development, we studied physico-chemical and immunological parameters using different JMP platforms - Graph Builder, Linear Model, Stability - which integrate regulatory constraints. The objective of the presentation is to explain the approach for studying stability data, to highlight the different issues, and to show how statistical modeling can support the decision process. Pharmaceutical guideline mentioned in this presentation: US FDA/ICH Q1E, Evaluation of Stability Data.
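For context on the extrapolation step referenced above, the Q1E approach commonly takes the proposed shelf life as the time at which the one-sided 95% confidence bound of the fitted mean trend first crosses the acceptance criterion. For a simple linear degradation model that bound is the standard regression confidence limit (a textbook result, shown here for orientation rather than as the presenter's exact method):

L(t) = \hat{\beta}_0 + \hat{\beta}_1 t \;-\; t_{0.95,\,n-2}\, s \sqrt{\frac{1}{n} + \frac{(t - \bar{t})^2}{\sum_{i=1}^{n} (t_i - \bar{t})^2}}

with the shelf life read off as the largest t for which L(t) (or the corresponding upper bound, for an increasing parameter) remains within specification.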
Labels (9):
Advanced Statistical Modeling
Automation and Scripting
Basic Data Analysis and Modeling
Consumer and Market Research
Data Exploration and Visualization
Design of Experiments
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Multivariate Statistical Process Control (2020-EU-EPO-461)
Monday, March 9, 2020
Piet Hoogkamer, Principal Scientist, Abbott Established Pharmaceuticals; Sven Daniel Schmitz, Abbott Established Pharmaceuticals. At Abbott, statistical process control (SPC) has been used for some time to better understand, control and improve our various manufacturing processes. That said, this is often still equivalent to univariate control charts tracking observations in one dimension. With the quality attributes of our processes being multivariate in nature and the increasing availability of more and different types of data, e.g., spectra, the use of multivariate statistical process control (MSPC) becomes increasingly attractive or even mandatory. Inspired by the EDQM draft chapter (5.28) on MSPC, we will showcase the use of MSPC to analyze two sets of highly complex and (auto)correlated example data: the first on controlling dissolution behavior via the particle size distribution, the second on monitoring the state of a process using all available measurements.
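As a minimal, hedged sketch of the core MSPC statistic (not the presenters' actual workflow, and with simulated data standing in for the dissolution and spectral examples), the JSL below computes a Hotelling T-squared value for a new observation against an in-control reference set:

// Simulate an in-control reference set: n rows, p = 3 correlated quality attributes
n = 200; p = 3;
Z = J( n, p, Random Normal() );
L = [1 0 0, 0.6 0.8 0, 0.4 0 0.9];    // lower-triangular mixing to induce correlation
X = Z * L`;

// Reference mean vector and covariance matrix
xbar = (J( 1, n, 1 ) * X) / n;        // 1 x p row of column means
Xc   = X - J( n, 1, 1 ) * xbar;       // centered data
S    = (Xc` * Xc) / (n - 1);          // p x p covariance
Sinv = Inverse( S );

// Hotelling T2 for a new multivariate observation (hypothetical values)
xnew = [1.8 2.1 -0.3];
d    = xnew - xbar;
T2   = d * Sinv * d`;
ucl  = p * (n - 1) * (n + 1) / (n * (n - p)) * F Quantile( 0.9973, p, n - p );  // Phase II limit
Show( T2, ucl );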
Labels (6):
Advanced Statistical Modeling
Basic Data Analysis and Modeling
Consumer and Market Research
Data Exploration and Visualization
Quality and Process Engineering
Reliability Analysis
See Fer Yer Sen: The Importance of Data Exploration, a JMP(R) Live Showcase (2020-EU-30MP-458)
Monday, March 9, 2020
Phil Kay, JMP Senior Systems Engineer, SAS. People and organizations make expensive mistakes when they fail to explore their data. Decision makers cause untold damage through ignorance of statistical effects when they limit their analysis to simple summary tables. In this presentation you will hear how one charity wasted billions of dollars in this way. You will learn how you can easily avoid these traps by looking at your data from many angles. An example from media reports on "best places to live" will show why you need to look beyond headline results, and how simple visual exploration - interactive maps, trends and bubble plots - gives a richer understanding. All of this will be presented entirely through JMP Public, showcasing the latest capabilities of JMP Live. In September 2017 the New York Times reported that Craven was the happiest area of the UK. Because this is an area that I know very well, I decided to take a look at the data. What I found was much more interesting than the media reports and was a great illustration of the small-sample fallacy. This story is all about the value of being able to explore data in many different ways, and how you can explore these interactive analyses and source the data through JMP Public. Hence "see fer yer sen", which translates from the local Yorkshire dialect as "see for yourself". If you want to find out more about this data exploration, read these two blog posts: The happy place? and Crisis in Craven? An update on the UK happiness survey. These and more interactive reports used in this presentation can be found in JMP Public.
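The small-sample fallacy the speaker mentions is easy to reproduce with a quick simulation; in the hedged JSL sketch below (all numbers are arbitrary, not the survey's), every area shares the same true score, yet the top and bottom of the sorted league table are dominated by the areas with the fewest respondents:

// Simulate survey means for areas that all share the same true happiness score
dt = New Table( "Simulated areas",
	Add Rows( 400 ),
	New Column( "Sample Size", Numeric, Formula( Random Integer( 20, 2000 ) ) ),
	New Column( "Observed Mean", Numeric,
		Formula( 7.5 + Random Normal() * 1.8 / Sqrt( :Sample Size ) ) )  // standard error shrinks with n
);
// Sort by Observed Mean: the extremes of the "league table" come mostly from small areas
dt << Sort( By( :Observed Mean ), Order( Descending ), Replace Table );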
Labels (13):
Advanced Statistical Modeling
Automation and Scripting
Basic Data Analysis and Modeling
Consumer and Market Research
Content Organization
Data Blending and Cleanup
Data Exploration and Visualization
Design of Experiments
Mass Customization
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Sharing and Communicating Results
Determining Confidence Limits for Linear Combinations of Variance Components in Mixed Models (2020-EU-EPO-455)
Monday, March 9, 2020
Hadley Myers, JMP Systems Engineer, SAS; Chris Gotwalt, JMP Director of Statistical Research and Development, SAS. Generating linear models that include random components is essential across many industries, particularly in the pharmaceutical and life science domains. The Mixed Model platform in JMP Pro allows such models to be defined and evaluated, yielding the contributions to the total variance of the individual model components, as well as their respective confidence intervals. Calculating linear combinations of these variance components is straightforward, but the practicalities of the problem (unequal degrees of freedom, non-normal distributions, etc.) prevent the corresponding confidence intervals of these linear combinations from being determined as easily. Previously, JMP Pro users have needed to turn to other analytic software, such as the "Variance Component Analysis" package in R, to resolve this gap in functionality. This presentation reports on the creation of an add-in for JMP Pro that uses parametric bootstrapping to obtain the needed confidence limits. The add-in, Determining Confidence Limits for Linear Combinations of Variance Components in Mixed Models, will be demonstrated, along with the details of how the technique overcomes the difficulties of this problem and the benefit to users for whom these calculations are a necessity.
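The add-in itself is not reproduced here, but the underlying parametric bootstrap idea can be sketched in a few lines of JSL: repeatedly simulate the component estimates from assumed sampling distributions, form the linear combination, and take percentile limits. The variance values, degrees of freedom and the scaled chi-square assumption below are purely illustrative:

// Illustrative REML estimates from a fitted mixed model (assumed values)
vBatch = 2.4;  dfBatch = 9;      // batch-to-batch variance and its (assumed) degrees of freedom
vError = 0.8;  dfError = 40;     // residual variance

nBoot = 5000;
total = J( nBoot, 1, 0 );
For( i = 1, i <= nBoot, i++,
	// draw each component from a scaled chi-square, a common parametric-bootstrap assumption
	b1 = vBatch * Random ChiSquare( dfBatch ) / dfBatch;
	b2 = vError * Random ChiSquare( dfError ) / dfError;
	total[i] = b1 + b2;          // the linear combination of interest (here, a simple sum)
);
lower = Quantile( 0.025, total );
upper = Quantile( 0.975, total );
Show( lower, upper );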
Labels (11):
Advanced Statistical Modeling
Automation and Scripting
Basic Data Analysis and Modeling
Consumer and Market Research
Content Organization
Data Exploration and Visualization
Design of Experiments
Mass Customization
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Measurement Systems Analysis for Curve Data (2020-EU-30MP-447)
Monday, March 9, 2020
Astrid Ruck, Senior Specialist in Statistics, Autoliv; Chris Gotwalt, JMP Director of Statistical Research and Development, SAS; Laura Lancaster, JMP Principal Research Statistician Developer, SAS. Measurement systems analysis examines a measurement process consisting not only of the measurement system, equipment and parts, but also of the operators, methods and techniques involved in the entire procedure of conducting the measurements. Automotive industry guidelines such as AIAG [1] or VDA [4] investigate a one-dimensional output per test, but they do not describe how to deal with curves as output. In this presentation, we take a first step by showing how to perform a gauge repeatability and reproducibility (GR&R) study using force vs. distance output curves. The Functional Data Explorer in JMP Pro is designed to analyze data that are functions, such as the measurement curves used to perform this GR&R study.
Labels (7):
Advanced Statistical Modeling
Basic Data Analysis and Modeling
Consumer and Market Research
Data Exploration and Visualization
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Variance Components Analysis (2020-EU-EPO-438)
Monday, March 9, 2020
Phil Greaves, Staff Scientist, Fujifilm Diosynth Biotechnologies; Amy Woodhall, Research Scientist, Fujifilm Diosynth Biotechnologies. With cost and timeline pressures on process development, there is a drive to use high-throughput methods. The recent commercial availability of small scale-down system hardware, coupled with design of experiments software, enables the potential discovery of the few critical process parameters from the many within the short timelines required. For process evaluation, early-phase development of recombinant protein fermentation processes often uses generic assay methods, but the high-throughput methods are not necessarily optimised for accuracy and precision. Thus the overall process analysis is the sum of process variability and that of the measurement system. The estimate of experimental error (noise) determines the size of response difference (signal) that can be readily detected in the experimental design. JMP contains process quality analysis tools, such as gauge analysis and measurement systems analysis, that can be used to identify the sources of variation. For expression of an intracellular model protein in E. coli, the work presented here will show how JMP can carry out variance component analysis on a nested design of the overall process analysis. This information can then be used in the subsequent JMP design of experiments evaluation.
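Once the nested-design variance components are estimated, turning them into the usual gauge metrics is simple arithmetic; a short JSL sketch with assumed values (the component names and numbers are illustrative, not Fujifilm's data):

// Assumed variance components from a nested (e.g., lot / assay run / replicate) analysis
vProcess = 4.0;          // process (part-to-part) variation
vRepeatability = 0.5;    // within-run measurement error
vReproducibility = 0.3;  // run-to-run or analyst-to-analyst error

vGauge = vRepeatability + vReproducibility;
vTotal = vProcess + vGauge;
pctGRR = 100 * Sqrt( vGauge / vTotal );   // %GR&R on the standard-deviation scale
ndc = Sqrt( 2 * vProcess / vGauge );      // number of distinct categories, approx. 1.41 * sigma_process / sigma_gauge
Show( pctGRR, ndc );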
Labels (9):
Advanced Statistical Modeling
Automation and Scripting
Basic Data Analysis and Modeling
Consumer and Market Research
Data Blending and Cleanup
Design of Experiments
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Performance Evaluation and Prediction for Quenching Oils in Production Using JMP® Pro (2020-EU-EPO-436)
Monday, March 9, 2020
Victor Guiller, R&D Engineer Quenching, Forming and Hydraulic, Fuchs Lubrifiant France S.A. Quenching is a very common method in the heat treatment of materials. In this process, a heated workpiece is rapidly cooled within a fluid to obtain certain material properties, i.e., a specific hardness of metals. In the production of quenching oils, a time-consuming performance test evaluating the cooling speed and characteristics of the fluid is used to assess whether the product meets the specifications. The evaluation and comparison of performance is realized through the analysis of six parameters obtained from the cooling curves. The study's objectives are to create a mathematical model of the cooling curves with the Functional Data Explorer in JMP Pro, to predict conformity or non-conformity of the production batches, and to classify the different products. The targeted additivation of these fluids has a major impact on their performance, such as the cooling rate. As a result of this study, the mathematical model leads to correct product and conformity status identification, and the prediction model makes it possible to plan correction of a non-conforming batch with the optimal amount of additive. The developed models make a significant contribution to making fluid development more efficient.
Labels (9):
Advanced Statistical Modeling
Automation and Scripting
Basic Data Analysis and Modeling
Consumer and Market Research
Data Exploration and Visualization
Design of Experiments
Predictive Modeling and Machine Learning
Quality and Process Engineering
Sharing and Communicating Results
Digital Analytics Boosted With JMP(R) Integration to Google Cloud (2020-EU-45MP-427)
Monday, March 9, 2020
Alfredo López Navarro, Data Lab Manager, Telefónica; Arne Ruhkamp, Senior Digital Analyst, Telefónica. Step into the world of digital analytics with a hybrid approach. When it comes to statistical analysis, web analytics tools are limited. JMP boosts insights by providing a flexible platform. It allows us to aggregate, cleanse, explore and interact with different types of data in an agile way. In this showcase we will 1) share with you how a business can benefit from cross-functional teams; 2) give a live demo on how to connect to Google Analytics through JMP; 3) merge the web analytics data with other data sources; and 4) generate and deliver the insights. Along the way we managed to break silos, change our working culture and improve our performance. There is still a long way to go. Our intention is to share all the materials with the Community: data sets, journals and a booklet. Included at the beginning of the video is a Welcome to the Summit greeting by JMP Customer Care Manager Jeff Perkinson.
Labels (10):
Advanced Statistical Modeling
Basic Data Analysis and Modeling
Consumer and Market Research
Content Organization
Data Access
Data Blending and Cleanup
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Sharing and Communicating Results
Change Point Analysis in JMP(R) (2020-EU-EPO-424)
Monday, March 9, 2020
Ademola Akande, Statistician, Abbott. Change is present everywhere, and all manufacturing and measurement processes display some level of variation. Consequently, having the appropriate tools to accurately identify change points in a process has become very important, particularly with large datasets. Furthermore, the constant focus on process monitoring creates the risk of introducing further changes into our processes by responding to false alarms. A change point analysis tool has therefore been developed using the JMP Scripting Language (JSL), which can identify multiple change points in a given process and graphically highlight when the change(s) occurred. The tool also provides the user with a confidence level for the likelihood that a change has truly occurred. This work was funded by Abbott Diabetes Care.
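The Abbott tool itself is not shown, but one common core for this kind of analysis (a CUSUM-of-deviations estimate of the change location plus a resampling-based confidence level, in the spirit of Taylor's change point analysis) can be sketched in JSL; the simulated series and thresholds are illustrative only:

// Simulated series with a level shift after observation 60
y = J( 100, 1, Random Normal() );
y[61 :: 100] = y[61 :: 100] + 1.5;
n = N Rows( y );

// Cumulative sum of deviations from the overall mean; its extreme point estimates the change location
dev = y - Mean( y );
cusum = J( n, 1, 0 );
cusum[1] = dev[1];
For( i = 2, i <= n, i++, cusum[i] = cusum[i - 1] + dev[i] );
range0 = Max( cusum ) - Min( cusum );
changeAt = Loc( Abs( cusum ) == Max( Abs( cusum ) ) )[1];

// Resampling: how often does a reshuffled (no-change) series reach the observed CUSUM range?
nBoot = 1000;
exceed = 0;
For( b = 1, b <= nBoot, b++,
	yb = Random Shuffle( y );
	devb = yb - Mean( yb );
	cb = J( n, 1, 0 );
	cb[1] = devb[1];
	For( i = 2, i <= n, i++, cb[i] = cb[i - 1] + devb[i] );
	If( Max( cb ) - Min( cb ) >= range0, exceed++ );
);
confidence = 100 * (1 - exceed / nBoot);  // percent confidence that a change occurred
Show( changeAt, confidence );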
Labels (9):
Advanced Statistical Modeling
Automation and Scripting
Basic Data Analysis and Modeling
Consumer and Market Research
Design of Experiments
Mass Customization
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
An Introduction to Structural Equations Models in JMP(R) Pro 15 (2020-EU-45MP-423)
Monday, March 9, 2020
Laura Castro-Schilo, JMP Research Statistician Developer, SAS. Abstract: Structural Equation Models (SEM) is a new platform in JMP Pro 15 that offers numerous modeling tools. Confirmatory factor analysis, path analysis, measurement error models and latent growth curve models are just some of the possibilities. In this presentation, we provide a general introduction to SEM by describing what it is, the unique features it offers to analysts and researchers, how it is implemented in JMP Pro 15 and how it is applied in a variety of fields, including market and consumer research, engineering, education, health and others. We use an empirical example that everyone can relate to, showing how the SEM platform is used to explore relations across variables and test competing theories.
Summary: The video shows how to fit models consistent with each of the "emotion theories" in the presentation. Together with the attached slides, it guides users through the SEM platform. The takeaway points: We have three a priori theories of how our variables relate to each other, and we fit models in SEM that map onto each of the theories. We then look for the most appropriate model by, first, examining individual model fit: in this example we used the chi-square, a measure of misfit (we want it to be small with respect to the degrees of freedom (df) and non-significant), but the CFI and RMSEA can be used too and are better with large sample sizes; we also used the normalized residuals heatmap, where we want the residuals within +/- 2 units. Second, we compare fit across models: the AICc weights help with this task, and when the models are nested (i.e., one is entirely contained within the other; in our example, the model for Theory #1 is nested within that for Theory #2), we can take the difference between the chi-squares and the df to obtain a delta chi-square and delta df that can be tested for significance against a chi-square distribution.
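Both comparisons in the takeaway points are easy to reproduce by hand once each fitted model reports its chi-square, df and AICc; a hedged JSL sketch with made-up fit statistics (not the values from the presentation):

// Made-up fit statistics for two nested models (Theory #1 is nested within Theory #2)
chi1 = 35.2;  df1 = 14;  aicc1 = 5123.4;
chi2 = 18.7;  df2 = 11;  aicc2 = 5109.8;

// Chi-square difference test for nested models
dChi = chi1 - chi2;
dDF  = df1 - df2;
pVal = 1 - ChiSquare Distribution( dChi, dDF );  // small p favors the less constrained model

// Akaike weights from AICc (applicable to non-nested comparisons as well)
best = Min( aicc1, aicc2 );
w1 = Exp( -0.5 * (aicc1 - best) );
w2 = Exp( -0.5 * (aicc2 - best) );
akaikeW1 = w1 / (w1 + w2);
akaikeW2 = w2 / (w1 + w2);
Show( dChi, dDF, pVal, akaikeW1, akaikeW2 );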
Labels (8):
Advanced Statistical Modeling
Basic Data Analysis and Modeling
Consumer and Market Research
Data Exploration and Visualization
Design of Experiments
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
SIFTomics and Data Analytics: The Quickest Way to Unravel the Totality of Your Chemical Space (2020-EU-30MP-414)
Monday, March 9, 2020
Camilla Liscio, Senior Application Chemist, Anatune; Jamie Minaeian, Application Chemist, Anatune. In a world of increasing complexity, analytical chemists must unravel the entirety of the chemical space of products and materials. On this never-ending quest from complexity to clarity, data analytics becomes an essential tool. VOCs are known to impart an odor to products. The traditional approach to quantifying odor uses a sensory panel, which is expensive and can be subject to problems brought about by fatigue. Selected Ion Flow Tube Mass Spectrometry (SIFT-MS), however, can selectively detect and quantify a wide range of odor compounds in real time, more cost-effectively. The challenge is how to make sense of the rich data set generated by fast SIFT-MS analysis. This is where JMP machine learning and multivariate analytics bring clarity, by enabling extraction and understanding of the most important chemical insight. This talk will demonstrate the synergistic power of SIFT-MS analysis combined with chemometrics to characterize the chemical space of odor compounds in a real application scenario.
Labels (9):
Advanced Statistical Modeling
Basic Data Analysis and Modeling
Consumer and Market Research
Content Organization
Mass Customization
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Sharing and Communicating Results
JMP(R) Pro: A Valuable Partner in the Journey From Laboratory (Valley) to Production (Top) (2020-EU-EPO-412)
Monday, March 9, 2020
Simon Stelzig, Head of Product Intelligence, Lohmann. JMP, and later JMP Pro, was used to guide the development of a novel structural adhesive tape from initial experiments towards an optimized product ready for sale. The basis was a seven-component mixture design created with JMP's Custom Design function. Unfortunately, almost 40% of the runs could be formulated but not processed. Even with this crippled design, predictions of processable optima for changing customer requests were possible using a new response and JMP's model platform. A necessary augmentation of the DOE using the Augment Design function continuously increased the number of experiments, enabling fine-tuning of the model and finally the prediction of a functioning prototype tape and product. Switching from JMP to JMP Pro within a follow-up project based on the original experiments, modelling became drastically more efficient and reliable thanks to its better protection against poor modelling, which we had encountered when not using the Pro version. The increasing number of runs and the capabilities of JMP Pro opened the way from classical DOE analysis towards the use of machine learning methods. This way, development speed has been increased even further, almost down to prediction and verification, in order to fulfill customer requests falling in the vicinity of our formulation. Editor's note: The presentation that @shs references at the beginning of his presentation is Using REST API Through HTTP Request and JMP Maps to Understand German Brewery Density (2020-EU-EPO-388).
Labels (9):
Advanced Statistical Modeling
Basic Data Analysis and Modeling
Consumer and Market Research
Content Organization
Data Access
Design of Experiments
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Simple Process Monitoring of Multiple Parameters Using JMP(R) (2020-EU-30MP-404)
Monday, March 9, 2020
Torsten Weber, Engineer Process Integration, Heliatek. The paper discusses the application of JMP at Heliatek for process monitoring in a pilot production environment. In a running production it is a challenge to keep track of multiple control charts simultaneously. The contribution describes an alarm dashboard written in JSL, which monitors multiple process parameters and visualizes critical limit violations in a simple way. The application counts the limit violations of multiple components within a defined time frame and depicts the results in a heat map. Database queries continuously update the data. This presentation will explain in detail: how to run this application in a loop to react as soon as possible to critical process variations; how to solve issues regarding automatic alarm emails without using the "mail()" or "alarm script()" functions of JSL; and how to share this alarm board over a network via a customized HTML report to minimize interrupting "hangover" while updating the data. This application helps to enhance the production yield of organic PV products through a simple visualization of multiple process parameters and therefore shortens the response time.
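A much-simplified sketch of the monitoring loop described above, in JSL; the connection string, query, column names and refresh interval are placeholders, and the real Heliatek dashboard adds the heat map, HTML report and e-mail handling:

// Simplified monitoring loop (placeholder DSN, query and column names)
While( 1,
	dt = Open Database(
		"DSN=ProcessDB;",
		"SELECT parameter, value, lcl, ucl FROM recent_readings",
		"readings"
	);
	// Flag and count limit violations per parameter over the queried time frame
	dt << New Column( "Violation", Numeric, Formula( :value < :lcl | :value > :ucl ) );
	summary = dt << Summary( Group( :parameter ), Sum( :Violation ) );
	// ... refresh the heat-map window / HTML report from the summary table here ...
	// (the production script also cleans up the previous pass's tables and windows)
	Wait( 600 );  // pause, then query again so critical variations are caught quickly
);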
Labels (14):
Advanced Statistical Modeling
Automation and Scripting
Basic Data Analysis and Modeling
Consumer and Market Research
Content Organization
Data Access
Data Blending and Cleanup
Data Exploration and Visualization
Design of Experiments
Mass Customization
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Sharing and Communicating Results
To Drink or Not to Drink? That Is the Question. Analyzing Alcohol Data With JMP(R) 15 (2020-EU-EPO45M-395)
Monday, March 9, 2020
Mandy Chambers, JMP Principal Test Engineer, SAS Melanie Drake, Principal Systems Developer, SAS/JMP Did you know that 30% of Americans are teetotalers, while another 24 million consume on average about 10 drinks per day? In 2017 alone, alcohol sales in the United States amounted to approximately 234.4 billion dollars. For this presentation, we narrowed our scope to explore alcohol sales data in North Carolina. Our objective was to help plan marketing strategies for increasing vodka sales for a local distillery by analyzing sales patterns in different counties. We began with using the new PDF import in JMP 15 to pull data from the web, as well as taking advantage of the data table features to enhance visualizations, clean up the data, and prepare it for our analysis. New features such as header graphs in the data table help determine which columns to use in evaluations. JMP allows comparative trends, fit analysis and forecasting to gather possible future sales scenarios, and JMP Live publishes these reports for everyone to view. When we are finished, not only will you know interesting statistics about North Carolinians concerning the amount they drink or the types of alcohol they consume, but you will also be able to use new JMP features. You will learn how painless JMP 15 makes it to analyze your data more effectively, create graphical visualizations attractive to the eye, and share your results more easily.
Labels (13):
Advanced Statistical Modeling
Automation and Scripting
Basic Data Analysis and Modeling
Consumer and Market Research
Content Organization
Data Access
Data Blending and Cleanup
Data Exploration and Visualization
Mass Customization
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Sharing and Communicating Results
Making Your Forecast Faster and Easier: Introducing New Time Series Forecast Platform in JMP(R) 15 (2020-EU-45MP-390)
Monday, March 9, 2020
Peng Liu, PhD, JMP Principal Research Statistician Developer, SAS; Jian Cao, PhD, JMP Principal Systems Engineer, SAS. A new Time Series Forecast platform has been developed to enable users to build forecasting models for multiple time series easily and quickly. In comparison to conventional univariate time series forecasting models, this new platform streamlines the model building process with minimal user input and makes forecasts for hundreds or thousands of time series in a single run. This is achieved by employing flexible modern smoothing methods and a well-accepted model selection procedure. Research has shown this approach's forecasting accuracy is compelling. In this paper we discuss the statistical details behind the new forecast platform and demonstrate real-life examples to show how the implementation achieves computational efficiency.
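To make the "flexible smoothing plus automatic selection" idea concrete, here is a hedged JSL sketch that fits simple exponential smoothing over a grid of smoothing constants and keeps the best in-sample fit; the actual platform searches a much richer family of state space smoothing models and uses a formal model selection criterion rather than raw SSE:

// Toy series (simulated); the platform handles hundreds of series in one run
y = J( 120, 1, Random Normal() ) + 10;
n = N Rows( y );

bestAlpha = .; bestSSE = .; bestLevel = .;
For( a = 1, a <= 19, a++,
	alpha = a / 20;
	level = y[1];
	sse = 0;
	For( t = 2, t <= n, t++,
		err = y[t] - level;      // one-step-ahead forecast error
		sse += err ^ 2;
		level += alpha * err;    // simple exponential smoothing update
	);
	If( Is Missing( bestSSE ) | sse < bestSSE,
		bestSSE = sse; bestAlpha = alpha; bestLevel = level
	);
);
Show( bestAlpha, bestLevel );    // bestLevel is the flat forecast for future periods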
Labels (7):
Advanced Statistical Modeling
Basic Data Analysis and Modeling
Consumer and Market Research
Design of Experiments
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Accounting for Primacy and Recency Effects in Discrete Choice Experiments (2020-EU-EPO-387)
Monday, March 9, 2020
Roselinde Kessels, Assistant Professor, University of Antwerp and Maastricht University; Robert Mee, William and Sara Clark Professor of Business Analytics, University of Tennessee. Past discrete choice experiments provide clear evidence of primacy and recency effects in the presentation order of the profiles within a choice set, with the first or last profiles in a choice set being selected more often than the other profiles. Existing Bayesian choice design algorithms do not accommodate profile order effects within choice sets. This can produce severely biased part-worth estimates, as we illustrate using a product packaging choice experiment performed for P&G in Mexico. A common practice is to randomize the order of profiles within choice sets for each respondent. While randomizing profile orders for each subject ensures near balance on average across all subjects, the randomizations for many individual subjects can be quite unbalanced with respect to profile order; hence, any tendency to prefer the first or last profiles may result in bias for those subjects. As a consequence, this bias may produce heterogeneity in hierarchical Bayesian estimates for subjects, even when the subjects have identical true preferences. As a design solution, we propose position-balanced Bayesian optimal designs that are constrained to achieve sufficient order balance. For the analysis, we recommend including a profile order covariate to account for any order preference in the responses.
Labels (8):
Advanced Statistical Modeling
Basic Data Analysis and Modeling
Consumer and Market Research
Content Organization
Design of Experiments
Predictive Modeling and Machine Learning
Quality and Process Engineering
Sharing and Communicating Results
NXP Structured Problem-Solving Approach: Two Case Studies in Semiconductor Test and Design for Automotive (2020-EU-EPO-384)
Monday, March 9, 2020
Corinne Bergès, Six Sigma Black Belt, NXP Semiconductors; Kurt Neugebauer, Analog Design Engineer, NXP Semiconductors; Da Dai, Design Automation Engineer, NXP Semiconductors; Martin Kunstmann, R&D-SUP Working Student, NXP Semiconductors; Alain Beaudet, Product and Test Engineer, NXP Semiconductors. Structured Problem Solving (SPS) is one of the three pillars of the NXP Six Sigma system, together with Quality Culture and Continuous Improvement, and further demonstrates the maturity of the NXP Quality system. The key approaches in NXP SPS fit within the DMAIC/DMADV, 8D or 5-Why frameworks. They make wide use of statistics (modeling, DOE, multivariate analysis, …) to turn assumptions into evidence, which is necessary for real defect root cause elimination. Two specific statistical analyses are described. In design for automotive, concerning the simulation of parametric, hard or soft defects, the purpose is to implement the best algorithm to reduce the number of simulations without impacting test coverage or the precision of the failure rate estimation: for this, JMP provides interesting clustering options. The NXP experiments will result in an algorithm and in recommendations for the new IEEE standard study on defect coverage accounting methods. Downstream in manufacturing, for capability index computation and normality testing, a methodology was designed in JMP to bypass the high sensitivity of these tests to slight abnormality by quantifying the shift from normality using the SHASH distribution and its kurtosis and skewness parameters. A script was implemented to automate it over the more than 3,000 tests for an automotive product.
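The normality-shift screening can be illustrated without the full SHASH fit: compute sample skewness and excess kurtosis for each test and flag tests whose shape drifts beyond chosen thresholds. A hedged JSL sketch on one simulated test (the NXP script wraps a SHASH-based version of this over the 3,000+ tests; the thresholds here are arbitrary):

// One simulated test parameter with a mild right skew
x = J( 500, 1, Exp( 0.25 * Random Normal() ) );
n = N Rows( x );

// Sample moments (biased versions are fine for a screening statistic)
m = Mean( x );
d = x - m;
s2 = Sum( d :* d ) / n;
skew = (Sum( d :* d :* d ) / n) / (s2 ^ 1.5);
exKurt = (Sum( d :* d :* d :* d ) / n) / (s2 ^ 2) - 3;

// Flag the test if its shape drifts too far from normal
flag = (Abs( skew ) > 0.5) | (Abs( exKurt ) > 1);
Show( skew, exKurt, flag );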
Labels (10):
Advanced Statistical Modeling
Automation and Scripting
Basic Data Analysis and Modeling
Consumer and Market Research
Data Exploration and Visualization
Design of Experiments
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Sharing and Communicating Results
How JMP(R) Can Reduce Unutilized Engineering Talent and Save Money (2020-EU-EPO-375)
Monday, March 9, 2020
Fabrizio Ruo Redda, R&D Senior Manager, Vishay Semiconductor Italiana. Semiconductor device development requires extensive testing and data analysis. The electrical characterization laboratory provides a large amount of data to development engineers, arriving each time in different formats according to the specific products under test. Development engineers usually spend a lot of time putting the data together, while only a limited fraction of their time is dedicated to value-added activities like statistically analyzing the data and drawing sound conclusions. Within the frame of a Lean Six Sigma project, it was possible to show the economic advantage achievable by eliminating the tedious data preparation, or better, "data moving" process. JMP's scripting capability is used to manage complex data files from different testers so that they can be easily uploaded into a SQL database. Development engineers can now use JMP to download data directly from SQL in a format ready for analysis. The typical engineering analysis time has been reduced by more than 85%, but more importantly, it is now dedicated only to value-added analysis without any intellectual waste. Furthermore, the quality of the analysis and conclusions is improved, considering also the possibility of making quick comparisons among products or with previously collected data.
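The "download data directly from SQL in a format ready for analysis" step is essentially one JSL call; a hedged sketch with placeholder connection details, table and column names:

// Placeholder DSN, credentials and query: substitute the real characterization database
dt = Open Database(
	"DSN=CharLab;UID=engineer;PWD=*****;",
	"SELECT lot_id, device, parameter, value, test_date FROM electrical_results WHERE product = 'XYZ123'",
	"XYZ123 electrical results"
);
// The resulting table is immediately ready for Graph Builder, Fit Model or product comparisons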
Labels (10):
Advanced Statistical Modeling
Automation and Scripting
Basic Data Analysis and Modeling
Consumer and Market Research
Content Organization
Data Access
Mass Customization
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Data Exploration and Discovery in Multi-isotope Imaging Mass Spectrometry (MIMS) in Cancer Research (2020-EU-30MP-371)
Monday, March 9, 2020
Greg McMahon, Principal Research Scientist, National Physical Laboratory
Multi-isotope imaging mass spectrometry (MIMS) combines stable isotope labeling of biological samples with high-spatial-resolution (sub-cellular) mass spectrometry imaging and extensive statistical analysis of the resulting image data. The images are rich in information, and JMP offers a quick and easy way to mine them for information that is either only subtly present in the image or hidden below the first "obvious" layer. Combining Graph Builder with simple data distributions and local data filters already provides a wealth of information, and the approach can be extended with cluster analysis and multivariate statistics. In this presentation, we will use an example tracking the metabolic fate of 13C and 18O stable isotope labeled glucose in mouse breast cancer tumors engineered to contain cells with either high or low levels of the Myc oncogene, a driver of aggressive breast cancer growth. We will finish with a few comments on the significance of the results for cancer research, aimed at the non-expert.
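A minimal JSL sketch of the Graph Builder plus local data filter step is shown below; the column names (pixel coordinates, an isotope ratio and a region label) are hypothetical placeholders rather than the presenter's actual table.

// Minimal sketch: plot an isotope-ratio image in Graph Builder and attach
// a local data filter. Column names are hypothetical placeholders.
Graph Builder(
	Variables(
		X( :x position ),
		Y( :y position ),
		Color( :isotope ratio )
	),
	Elements( Points( X, Y, Legend( 1 ) ) ),
	Local Data Filter( Add Filter( columns( :region ) ) )
);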
Labels (10):
Advanced Statistical Modeling
Automation and Scripting
Basic Data Analysis and Modeling
Consumer and Market Research
Content Organization
Data Exploration and Visualization
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Sharing and Communicating Results
The Role of Perception in Statistics-Based Decisions (2020-EU-45MP-367)
Monday, March 9, 2020
Bryan Fricke, JMP Principal Software Developer, SAS JMP is a powerful tool for generating statistical reports for evaluation by decision makers. However, when it comes to preparing reports, accuracy and comprehensibility are only part of the story. For example, psychologists Amos Tversky and Daniel Kahneman have suggested that presenting results in terms of a potential loss can have about twice the psychological impact as an equivalent gain. In this session, we will explore the role perception plays in statistics-based decisions and how knowledge of that role should inform JMP users with respect to generating reports for decision makers.
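The "about twice the psychological impact" figure corresponds to the loss-aversion coefficient of prospect theory; in Tversky and Kahneman's 1992 estimates the value function is roughly

v(x) = x^\alpha              for x \ge 0
v(x) = -\lambda (-x)^\beta   for x < 0,    with \alpha \approx \beta \approx 0.88 and \lambda \approx 2.25,

so a loss is weighted about 2.25 times as heavily as an equal-sized gain.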
Labels (9):
Automation and Scripting
Basic Data Analysis and Modeling
Consumer and Market Research
Content Organization
Mass Customization
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Sharing and Communicating Results
Modeling a New Equipment Behavior (2020-EU-EPO-360)
Monday, March 9, 2020
Vincent DE SCHUYTENEER, Data Engineer, Lynred
After receiving a new piece of process equipment, we follow a quality methodology to qualify it and bring it into production. We first ran an initial DOE and then deepened the results with two additional DOEs. This allowed us to understand and model the behavior of the equipment across a large process window. Because we were able to capture the equipment log data, we developed a JMP script to load them into a JMP table. We then analyzed these data with the Functional Data Explorer platform combined with hierarchical clustering. Thanks to these data and methods, we identified two new parameters that influence product performance. This complete methodological approach with JMP tools helped us deepen our knowledge of the equipment behavior quickly and significantly. We now have a complete model of the equipment that will be very helpful throughout its production life.
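The log-collection step can be sketched in JSL; the folder path and file extension below are hypothetical placeholders, and the combined table would then be passed on for analysis.

// Minimal sketch: gather equipment log files from one folder into a single JMP table.
// Folder path and the ".csv" extension are hypothetical placeholders.
logDir = "C:/Equipment Logs/";
files = Files In Directory( logDir );
allLogs = Empty();
For( i = 1, i <= N Items( files ), i++,
	If( Ends With( files[i], ".csv" ),
		dt = Open( logDir || files[i] );
		If( Is Empty( allLogs ),
			allLogs = dt,                // first file becomes the master table
			allLogs << Concatenate( dt, Append to first table );
			Close( dt, No Save );
		);
	)
);
// allLogs is now ready for analysis, e.g. in the Functional Data Explorer platform.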
Labels (12):
Advanced Statistical Modeling
Automation and Scripting
Basic Data Analysis and Modeling
Consumer and Market Research
Content Organization
Data Exploration and Visualization
Design of Experiments
Mass Customization
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Sharing and Communicating Results
Using Simulation Methods in JMP® to Prevent Supply Chain Fires (2020-EU-30MP-356)
Monday, March 9, 2020
Stephen Pearson, Specialist Data Scientist, Syngenta
Many powdered materials slowly oxidize over time, which generates heat. If the material is in bulk form (such as during transport or storage), heat generation can exceed heat loss, leading to ignition. Climate control and limits on packing amounts can reduce the risk, but they increase costs for the consumer through reduced logistical options, larger shipping volumes and the disposal of additional packaging. Laboratory tests to determine a safe packing size are well established, but they are costly, especially for new products where only limited amounts of material are available. The physics of the oxidation process can be simulated provided all the material properties are known. Using JMP® we will demonstrate how to combine these two approaches to reduce the amount of thermal stability testing required: 1) generate a constrained space-filling experimental design; 2) control the simulation software (COMSOL Multiphysics®) via JSL; 3) build meta-models; 4) simulate the outcome for new materials. By obtaining estimates of different material properties with each test, the prediction uncertainty can be updated to suggest the range of suitable packaging given the available data. This enables a data-driven approach to the selection of laboratory tests.
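Step 2, driving the simulation software from JSL, can be illustrated with JSL's Run Program() function; the executable path, batch arguments and model file names below are hypothetical placeholders rather than Syngenta's actual setup.

// Minimal sketch: launch a COMSOL batch run from JSL and capture its console output.
// Executable path, arguments and file names are hypothetical placeholders.
log = Run Program(
	Executable( "C:/Program Files/COMSOL/bin/comsolbatch.exe" ),
	Options( {"-inputfile", "self_heating_model.mph", "-outputfile", "self_heating_out.mph"} ),
	Read Function( "text" )   // wait for the run to finish and return its console output
);
Write( log );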
Labels (10):
Advanced Statistical Modeling
Automation and Scripting
Basic Data Analysis and Modeling
Consumer and Market Research
Content Organization
Data Blending and Cleanup
Design of Experiments
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Calculating Nonparametric Tolerance Intervals for Small Sample Sizes (2020-EU-30MP-343)
Monday, March 9, 2020
Oliver Thunich, Consultant, STATCON GmbH
In industrial production, tolerance intervals are widely used to assess the quality of a process. The commonly used tolerance intervals, however, assume normally distributed data, which is problematic for many processes. Following a client request, we developed a way to calculate nonparametric tolerance intervals by computing confidence intervals for quantiles with the nonparametric empirical-likelihood approach implemented in JMP. Because the desired sample sizes are very small, traditional nonparametric confidence intervals tend to give unstable results. We therefore developed a JSL script that extends the empirical-likelihood method and generates stable nonparametric tolerance intervals for a large proportion of the population even with small samples. A simulation study evaluates the performance of the approach against existing methods using both production data and survival-analysis data. We found that the proposed method is much more stable than existing methods, especially when the data depart heavily from a normal distribution. Using JMP together with the implemented method, we are able to assure the quality of processes where measuring quality is very costly and/or time-consuming.
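For context on why small samples are difficult here (this is the classical background result, not the authors' empirical-likelihood construction): a distribution-free tolerance interval built from the sample minimum and maximum covers at least a proportion p of the population with confidence

1 - n p^(n-1) + (n-1) p^n,

so, for example, demanding 95% confidence of covering 99% of the population already requires roughly n = 473 observations, which is why stabilized methods are needed at small n.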
Labels (9):
Advanced Statistical Modeling
Automation and Scripting
Basic Data Analysis and Modeling
Consumer and Market Research
Data Exploration and Visualization
Predictive Modeling and Machine Learning
Quality and Process Engineering
Reliability Analysis
Sharing and Communicating Results