Brian Corcoran, JMP Director of Research and Development, SAS
Dieter Pisot, JMP Principal Application Developer, SAS
Eric Hill, JMP Distinguished Software Developer, SAS

You know the value of sharing insights as they emerge. JMP Live — the newest member of the JMP product family — reconceptualizes sharing by taking the robust statistics and visualizations in JMP and extending them to the web, privately and securely. If you'd like a more iterative, dynamic and inclusive path to showing your data and making discoveries, join us. We'll answer the following questions: What is JMP Live? How do I use it? How do I manage it? For background information on the product, see this video from Discovery Summit Tucson 2019 and the JMP Live product page.

JMP Live Overview (for Users and Managers) – Eric Hill
- What is JMP Live? Why use JMP Live?
- Interactive publish and replace
- What happens behind the scenes when you publish
- Groups, from a user perspective
- Scripted publishing: stored credentials, API keys and replacing reports

Setup and Maintenance (for JMP Live Administrators) – Dieter Pisot
- Administering users and groups
- Limiting publishing
- Setting up JMP Live: Windows services and .env files
- Upgrading and applying a new license
- Using Keycloak single sign-on

Installing and Setting Up the Server (for IT Administrators) – Brian Corcoran
- Choosing architectural configurations based on expected usage
- Understanding SSL certificates and their importance
- Installing the JMP Live database component
- Installing the JMP Pro and JMP Live components on a separate server
- Connecting JMP Live to the database
- Testing the installed configuration to make sure it is working properly
Dieter Pisot, JMP Principal Systems Engineer, SAS
Stan Koprowski, JMP Senior Systems Engineer, SAS

Data changes, and so do your JMP Live reports. Typical data changes involve additional observations or modifications to the columns of data, and both necessitate updates to published reports. In the first scenario, an existing report needs to be recalculated to reflect the new observations, or rows of data, used in the report. In the second, you want to restructure the underlying data by adding or removing columns of information used in the report. In both situations you must update your report on a regular basis. In this paper we provide practical examples of how to organize JSL scripts that facilitate the replacement of an existing JMP Live report with a current version. Prior to the live demonstration, we discuss key security practices, including protecting the credentials needed to connect to JMP Live. The code presented is designed to be reused and shared with anyone who needs to publish or replace a JMP Live report on a predefined time interval, such as hourly, daily, weekly or monthly. With some basic JSL knowledge you can easily adapt it for automated updates to any of your other existing JMP Live reports. Not a coder? No worries, we've got your back: we also provide a JMP add-in that uses a wizard-based approach to schedule the publishing of a new report, or of a replacement report, for those with little JSL knowledge.
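Editor's note: The sketch below illustrates the general shape of such a script: the API key is read from a file kept outside the script (so credentials are never hard-coded), the report is rebuilt from refreshed data inside a reusable expression, and Schedule() queues that expression to run after a set delay. The paths, column name and server URL are illustrative, and the exact Publish() arguments for New Web Report (API key, replace-by-ID options) vary by JMP version, so check the Scripting Index for your release.

```jsl
Names Default To Here( 1 );

// Credentials live in a separate text file, not in the script (illustrative path).
apiKey = Load Text File( "$DOCUMENTS/jmp_live_api_key.txt" );

publishLatest = Expr(
	dt = Open( "$DOCUMENTS/production_data.jmp" );   // refreshed data source (illustrative)
	rpt = dt << Distribution( Column( :Measurement ) );
	web = New Web Report();
	web << Add Report( rpt );
	// Publish to the JMP Live server; pass the stored apiKey and any replace options
	// using the syntax documented for your JMP version.
	web << Publish( URL( "https://jmplive.example.com" ) );
	Close( dt, No Save );
);

// Run the publish step once, 24 hours (86,400 seconds) from now. For fully unattended,
// recurring updates, the add-in mentioned above or an OS task scheduler is more robust.
Schedule( 86400, publishLatest );
```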
Zhiwu Liang, Principal Scientist, Procter & Gamble
Pablo Moreno Pelaez, Group Scientist, Procter & Gamble

Car detailing is a tough job. Transforming a car from a muddy, rusty, pet-fur-filled box on wheels into a like-new, clean and shiny ride takes a lot of time, specialized products and a skilled detailer. But... what does the customer really appreciate in such a detailed car cleaning and restoring job? Are shiny rims most important for satisfaction? Interior smell? A shiny waxed hood? It is critical for a car detailing business to know the answers to these questions to optimize the time spent per car, the products used, and the level of detailing needed at each point of the process. With the objective of maximizing customer satisfaction and optimizing the resources used, we designed a multi-stage customer design of experiments. We identified the key vectors of satisfaction (or failure), defined the levels for those, and approached the actual customer testing in adaptive phases, augmenting the design in each of them. This poster will take you through the thinking, designs, iterations and results of this project. What makes customers come back to their car detailer? Come see the poster and find out!

Speaker Transcript

Zhiwu Liang: Hello, everyone. I'm Zhiwu Liang, statistician at the Brussels Innovation Center for the Procter and Gamble company. I'm working for the R&D department.

Pablo Moreno Pelaez: Yep. So I'm Pablo Moreno Pelaez. I'm working right now in Singapore in the R&D department for Procter and Gamble.

So we wanted to introduce to you this poster where we want to share a case study in which we wanted to figure out what makes a car detailing job great. So as you know, Procter and Gamble, the very famous company for car detailing jobs... no, just a joke. We had to anonymize what we have done, so this is the way we wanted to share this case study, putting it in the context of a car detailing job. What we wanted to figure out here is what were the key customer satisfaction factors, for which we then built an iterative design that we tested with some of those customers, to figure out how to build the model and how to optimize the job detailing for the car. So how do we minimize the use of some of our ingredients? How do we minimize the time we take for some of the tasks that it takes to do the job detailing?

So if you go to the next slide, the first thing that we went to take a look at is: what are the different vectors that a customer will look at when they take the car to get detailed, to get it clean and shiny, and go back home with basically a brand-new car? They are looking at clean attributes, they're looking at shine attributes, and they are looking at the freshness of the car. From a cleaning point of view, we looked at the exterior cleaning, the cleaning of the rims and the cleaning of the interior; then the shine of the overall body, the rims and the windows; and of course the overall freshness of the interior.

And then we wanted to build this by modifying these attributes in different ways and combining the different finishes that a potential car detailing job would give. We wanted to estimate, and be able to build the model to calculate, what the overall satisfaction, and also the satisfaction with the cleaning and the satisfaction with the shine, would be when modifying those different vectors. This will allow us in the future to use the model to estimate:
Okay, can we reduce the time that we spend on the rims, because it's not important? Or can we reduce the time that we spend on the interior, or reduce the amount of products that we use for freshness, if those are not important? So really, to then optimize how we spend the resources on delivering the car detailing jobs. So in the next slide you can see a little bit of the phases of the study.

Zhiwu Liang: Yeah, so as Pablo said, as the car detailing job company we are very focused on consumer satisfaction. So for this particular job, what we have to do is identify the key factors which drive the consumer overall satisfaction and the clean and shine satisfaction. In order to do that, we separate our study design and data collection experiments into three steps.

First, we do the pilot, which is designed with five different scenarios, using five cars, to set up the different levels of each of the factors. At this point we set all of these five factors Pablo previously described at two levels, one low and the other high. Then we recruit 20 consumers to evaluate all of the five cars in a different order. The main objective of this pilot is to check the methodology, to check whether the questions we ask are understood by consumers and answered correctly, and also to define the proper range of each factor.

After that, we go to phase one, which extends our design space to seven factors. Some factors keep the low/high levels as in the pilot; some extend to low, medium and high, because we think it is more relevant to the consumer to include more levels in the factor. And since we have more factors, from the custom design point of view you will generate more experimental runs in the study, so in total we have 90 runs of the car settings, and we ask each of the panelists to still evaluate five, but using a different order or different combination; therefore we set up the custom design. Since each consumer needs to evaluate five out of the 90 settings, we have to use the balanced incomplete block design technique, and we use 120 customers, each of them evaluating five cars.

So with the data from these 120 customers, we run the model to identify what the main effects are and what the interactions are in our model. Then through that we remove the non-important factors and go to phase two, using the finally identified six factors and of course adding more levels for some factors, because we saw that low was not low enough in the phase one study and the middle did not really match our consumer satisfaction. So we add some factor levels for the low and some factor levels for the middle and high into the current design space.

The phase two design of experiments is augmented from phase one. With that we get different settings for the 90 different cars, then ask 120 consumers to evaluate five each, in a different combination. Through that we can identify what the best factor settings are, which give the optimal solution for the consumer satisfaction and the clean and shine satisfaction. So as you can see here, we run the model using our six factor settings, each of which has played some role for the consumer satisfaction, the cleaning and the shine satisfaction. For the overall satisfaction, clearly we can see that the rim cleaning, the window shine and the interior cleaning are the key drivers.
So if consumers see the rims clean and the windows shiny, normally they will be satisfied with our car detailing job. And we also identified a significant interaction: exterior clean and interior clean, these two things combined together, contribute differently to the overall satisfaction, the clean satisfaction and the shine satisfaction models. We identified the most significantly impactful factors for clean: clearly, all of the clean factors relate to the clean satisfaction, and for shine, all of the shine ones relate to the shine satisfaction. But still, from a different perspective, clean is focused on the rims and shine is focused on the windows. So from validating, we can have the better settings for all the car-related factors, which helps us to define the new projects that achieve the best consumer satisfaction based on all of the factor settings, I think.
Phil Kay, JMP Senior Systems Engineer, SAS

People and organizations make expensive mistakes when they fail to explore their data. Decision makers cause untold damage through ignorance of statistical effects when they limit their analysis to simple summary tables. In this presentation you will hear how one charity wasted billions of dollars in this way. You will learn how you can easily avoid these traps by looking at your data from many angles. An example from media reports on "best places to live" will show why you need to look beyond headline results, and how simple visual exploration - interactive maps, trends and bubble plots - gives a richer understanding. All of this will be presented entirely through JMP Public, showcasing the latest capabilities of JMP Live.

In September 2017 the New York Times reported that Craven was the happiest area of the UK. Because this is an area that I know very well, I decided to take a look at the data. What I found was much more interesting than the media reports and was a great illustration of the small sample fallacy.

This story is all about the value of being able to explore data in many different ways, and how you can explore these interactive analyses and source the data through JMP Public. Hence, "see fer yer sen", which translates from the local Yorkshire dialect as "see for yourself".

If you want to find out more about this data exploration, read these two blog posts: The happy place? and Crisis in Craven? An update on the UK happiness survey.

This and more of the interactive reports used in this presentation can be found here in JMP Public.
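Editor's note: The small sample fallacy mentioned above is easy to reproduce for yourself. The hypothetical JSL simulation below gives 300 areas exactly the same true mean happiness but very different survey sample sizes; the most extreme-looking averages all come from the smallest samples, which is why a headline "happiest area" ranking needs to be read alongside sample size.

```jsl
Names Default To Here( 1 );

// 300 simulated areas, identical true mean (7.5) and person-to-person SD (2),
// but survey sample sizes anywhere from 30 to 3000 respondents.
dt = New Table( "Simulated happiness survey",
	Add Rows( 300 ),
	New Column( "Sample Size", Numeric, Formula( Random Integer( 30, 3000 ) ) ),
	New Column( "Mean Happiness", Numeric,
		// Sampling error of the area average shrinks with Sqrt( n )
		Formula( 7.5 + Random Normal( 0, 2 / Sqrt( :Sample Size ) ) )
	)
);

// Funnel-shaped scatter: extreme area averages cluster at small sample sizes.
dt << Graph Builder(
	Variables( X( :Sample Size ), Y( :Mean Happiness ) ),
	Elements( Points( X, Y ) )
);
```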
Hadley Myers, JMP Systems Engineer, SAS
Chris Gotwalt, JMP Director of Statistical Research and Development, SAS

Generating linear models that include random components is essential across many industries, but particularly in the pharmaceutical and life science domains. The Mixed Model platform in JMP Pro allows such models to be defined and evaluated, yielding the contributions to the total variance of the individual model components, as well as their respective confidence intervals. Calculating linear combinations of these variance components is straightforward, but the practicalities of the problem (unequal degrees of freedom, non-normal distributions, etc.) prevent the corresponding confidence intervals of these linear combinations from being determined as easily. Previously, JMP Pro users have needed to turn to other analytic software solutions, such as the "Variance Component Analysis" package in R, to resolve this gap in functionality and fulfill this requirement. This presentation reports on the creation of an add-in, available for use with JMP Pro, that uses parametric bootstrapping to obtain the needed confidence limits. The add-in, Determining Confidence Limits for Linear Combinations of Variance Components in Mixed Models, will be demonstrated, along with the accompanying details of how the technique was used to overcome the difficulties of this problem, as well as the benefit to users for whom these calculations are a necessity.
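Editor's note: The sketch below shows the parametric bootstrap idea in miniature for a one-way random-effects model, using simple method-of-moments estimates rather than the JMP Pro Mixed Model fit that the add-in works from; the variance component values are made up. Data are repeatedly simulated from the fitted components, the components are re-estimated from each simulated data set, and percentile limits are taken for the linear combination of interest (here, the total variance).

```jsl
Names Default To Here( 1 );

k = 10;        // number of random-effect levels (e.g., batches)
n = 5;         // measurements per level
sb2 = 4;       // estimated between-batch variance component (made up)
sw2 = 1;       // estimated residual variance component (made up)
nboot = 2000;
totals = J( nboot, 1, . );

For( b = 1, b <= nboot, b++,
	// 1. Simulate a data set from the fitted model (the "parametric" step)
	groupMeans = J( k, 1, . );
	ssw = 0;
	For( i = 1, i <= k, i++,
		yi = Random Normal( 0, Sqrt( sb2 ) ) + J( n, 1, Random Normal( 0, Sqrt( sw2 ) ) );
		groupMeans[i] = Mean( yi );
		ssw += (n - 1) * Std Dev( yi ) ^ 2;
	);
	// 2. Re-estimate the variance components by method of moments
	msw = ssw / (k * (n - 1));
	msb = n * Std Dev( groupMeans ) ^ 2;
	sb2hat = Maximum( (msb - msw) / n, 0 );
	// 3. Store the linear combination of interest (total variance)
	totals[b] = sb2hat + msw;
);

// Percentile confidence limits for the linear combination
sorted = Sort Ascending( totals );
Show( sorted[Ceiling( 0.025 * nboot )], sorted[Ceiling( 0.975 * nboot )] );
```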
Simon Stelzig, Head of Product Intelligence, Lohmann

JMP, and later JMP Pro, was used to guide the development of a novel structural adhesive tape from initial experiments towards an optimized product ready for sale. The basis was a seven-component mixture design created with JMP's Custom Design function. Unluckily, almost 40% of the runs could be formulated but not processed. Even with this crippled design, predictions of processible optima for changing customer requests were possible using a new response and JMP's model platform. A necessary augmentation of the DoE using the Augment Design function continuously increased the number of experiments, enabling fine-tuning of the model and finally the prediction of a functioning prototype tape and product. After switching from JMP to JMP Pro within a follow-up project based on the original experiments, modelling became drastically more efficient and reliable thanks to its better protection against poor modelling, which we had encountered when not using the Pro version. The increasing number of runs and the capabilities of JMP Pro opened the way from classical DoE analysis towards the use of machine learning methods. This way, development speed has been increased even further, almost down to prediction and verification only, in order to fulfill customer requests falling in the vicinity of our formulations.

Editor's note: The presentation that @shs references at the beginning of his presentation is Using REST API Through HTTP Request and JMP Maps to Understand German Brewery Density (2020-EU-EPO-388)
Roselinde Kessels, Assistant Professor, University of Antwerp and Maastricht University
Robert Mee, William and Sara Clark Professor of Business Analytics, University of Tennessee

Past discrete choice experiments provide clear evidence of primacy and recency effects in the presentation order of the profiles within a choice set, with the first or last profiles in a choice set being selected more often than the other profiles. Existing Bayesian choice design algorithms do not accommodate profile order effects within choice sets. This can produce severely biased part-worth estimates, as we illustrate using a product packaging choice experiment performed for P&G in Mexico. A common practice is to randomize the order of profiles within choice sets for each respondent. While randomizing profile orders for each subject ensures near balance on average across all subjects, the randomizations for many individual subjects can be quite unbalanced with respect to profile order; hence, any tendency to prefer the first or last profiles may result in bias for those subjects. As a consequence, this bias may produce heterogeneity in hierarchical Bayesian estimates for subjects, even when the subjects have identical true preferences. As a design solution, we propose position balanced Bayesian optimal designs that are constrained to achieve sufficient order balance. For the analysis, we recommend including a profile order covariate to account for any order preference in the responses.
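Editor's note: The per-subject imbalance argument is easy to see with a toy simulation. In the hypothetical sketch below, each of 200 respondents sees 12 choice sets of three profiles whose within-set order is randomized independently; under perfect balance a given profile would appear in the first position 4 times per respondent, but the simulated distribution spreads well beyond that.

```jsl
Names Default To Here( 1 );

nResp = 200;   // respondents
nSets = 12;    // choice sets per respondent, each with 3 profiles

firstCount = J( nResp, 1, 0 );
For( r = 1, r <= nResp, r++,
	For( s = 1, s <= nSets, s++,
		// Randomly pick which of the 3 profiles lands in the first position
		If( Random Integer( 1, 3 ) == 1,
			firstCount[r] = firstCount[r] + 1
		)
	)
);

dt = New Table( "Order balance per respondent",
	Add Rows( nResp ),
	New Column( "First Position Count", Numeric, Set Values( firstCount ) )
);
// Many respondents sit far from the balanced value of 4 out of 12
dt << Distribution( Column( :Name( "First Position Count" ) ) );
```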
Laura Lancaster, JMP Principal Research Statistician Developer, SAS Jianfeng Ding, JMP Senior Research Statistician Developer, SAS Annie Zangi, JMP Senior Research Statistician Developer, SAS   JMP has several new quality platforms and features – modernized process capability in Distribution, CUSUM Control Chart and Model Driven Multivariate Control Chart – that make quality analysis easier and more effective than ever. The long-standing Distribution platform has been updated for JMP 15 with a more modern and feature-rich process capability report that now matches the capability reports in Process Capability and Control Chart Builder. We will demonstrate how the new process capability features in Distribution make capability analysis easier with an integrated process improvement approach. The CUSUM Control Chart platform was designed to help users detect small shifts in their process over time, such as gradual drift, where Shewhart charts can be less effective. We will demonstrate how to use the CUSUM Control Chart platform and use average run length to assess the chart performance. The Model Driven Multivariate Control Chart (MDMCC) platform, new in JMP 15, was designed for users who monitor large amounts of highly correlated process variables. We will demonstrate how MDMCC can be used in conjunction with the PCA and PLS platforms to monitor multivariate process variation over time, give advanced warnings of process shifts and suggest probable causes of process changes.
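Editor's note: The cumulative-sum logic that makes CUSUM charts sensitive to small sustained shifts can be sketched in a few lines of JSL. The example below is hypothetical: 60 observations from a process with target 10 and sigma 1, with a 0.5-sigma upward shift after observation 30. Individual points barely stand out, but the one-sided CUSUM statistic accumulates the shift and eventually crosses the decision limit. (The CUSUM Control Chart platform does all of this, plus average run length assessment, without hand-coding.)

```jsl
Names Default To Here( 1 );

target = 10;
sigma  = 1;
K = 0.5 * sigma;   // reference value, typically half the shift to detect
H = 5 * sigma;     // decision limit, a common default

// Simulate 60 observations with a small upward shift after observation 30
x = J( 60, 1, 0 );
For( i = 1, i <= 60, i++,
	x[i] = Random Normal( target + If( i > 30, 0.5 * sigma, 0 ), sigma )
);

// Upper one-sided tabular CUSUM: C+[i] = max( 0, x[i] - (target + K) + C+[i-1] )
cplus = J( 60, 1, 0 );
For( i = 1, i <= 60, i++,
	prev = If( i == 1, 0, cplus[i - 1] );
	cplus[i] = Maximum( 0, x[i] - (target + K) + prev )
);

signalAt = Loc( cplus > H );   // observations where the chart would signal
Show( signalAt );
```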
Carlos Ortega, Project Leader, Avantium
Daria Otyuskaya, Project Leader, Avantium
Hendrik Dathe, Services Director, Avantium

Creativity is at the center of any research and development program. Whether it is a fundamental research topic or the development of new applications, the basis of solid research rests on robust data that you can trust. Within Avantium, we focus on executing tailored catalysis R&D projects, which vary from customer to customer. This requires a flexible solution to judge the large amount of data that is obtained in our up-to-64-reactor high-throughput catalyst testing equipment. We use JMP and JSL scripts to improve the data workflow and its integration. In any given project, the data is generated in different sources, including our proprietary catalyst testing equipment — Flowrence® — on-line and off-line analytical equipment (e.g., GC, S&N analyzers and SimDis) or manual data records (e.g., MS Excel files). The data from these sources are automatically checked by our JSL scripts, and with the statistical methods available in JMP we are able to calculate key performance parameters, produce key performance plots and generate automatic reports that can be shared directly with the clients. The use of scripts guarantees that the data handling process is consistent, as every data set in a given project is treated the same way. This provides seamless integration of results and reports, which are ready to share on a software platform known to our customers.

Auto-generated transcript...

Carlos Ortega: Yeah. Hi, and welcome to our presentation at the JMP Discovery Summit. Of course, we would have liked to give this presentation in person, but under the current circumstances, this is the best way we can still share the way we are using JMP in our day-to-day work and how it helps us rework our data. However, the presentation in this way, with the video, also has an advantage for you as a viewer, because if you want to grab a coffee right now, you can just hit pause and continue when the coffee is ready. But looking at the time, I guess the summit is right now well under way, and most likely you have already heard quite some exciting presentations on how JMP can help you make more sense out of your data, with statistical tools to gain deeper insight and dive into more parts of your data. However, what we want to talk about today (and this is hidden under the title about data quality assurance) is the scripting engine, everything which has to do with JSL scripting, because this helps us a lot in our day-to-day work to prepare the data, which are then ready to be used for data analysis. And by "we" I mean Carlos Ortega, Daria Otyuskaya and myself, whom I now want to introduce a bit, to get a better feeling of who's doing this. But of course, as usual, there are some rules to this, which are the disclaimer about the data we are using. If you're a lawyer, for sure you're going to press pause to study this in detail; for all other people, let's dive right into the presentation. And of course, nothing better than to start with a short introduction of the people. You see already the location we all have in common, which is Amsterdam in the Netherlands, and we all have in common that we work at Avantium, a company that provides sustainable technologies. However, the locations we come from are spread all over the world.
We have, on the left side, Carlos Ortega, a chemical engineer from Venezuela, who has lived in Holland for about six years and has worked at Avantium for about two years as a project leader in Services. Then we have on the right side Daria Otyuskaya from Russia, also working here for about two years and having spent the last five years in the Benelux area, where she did her PhD in chemical engineering. And myself: I have the only advantage that I can travel home by car, as I originate from Germany. I have lived in Holland for about 10 years and joined Avantium about three years ago.

But now, let's talk a bit more about Avantium. I just want to briefly lay out the things we are doing. Avantium, as I mentioned before, is a provider of sustainable technologies and has three business units. One is Avantium Renewable Polymers, where we develop a biodegradable polymer called PEF, which is one hundred percent plant-based and recyclable. Second, we have a business unit called Avantium Renewable Chemistries, which offers renewable technologies to produce chemicals like MEG or industrial sugars from non-food biomass. And last but not least, a very exciting technology where we turn CO2 from the air into chemicals via electrochemistry. But not too much to talk about these two business units, because Carlos, Daria and myself are all working in Avantium Catalysis, which was founded 20 years ago and is still the foundation of Avantium's technology innovations. We are a service provider, accelerating the research in your company, in catalyst research to be more specific. And we offer there, as you can see on the right-hand side, systems, services and a service called refinery catalyst testing. We really help companies to develop their R&D, as you see at the bottom.

But this is enough about Avantium. Let's talk a bit about how we work in projects and how JMP can help us there to accelerate things and get better data out of it, which Carlos will later show in a demo for us. As mentioned before, we are a service provider, and as a service provider we get a lot of requests from customers to develop better catalysts or a better process. And now you might ask yourself, what's a catalyst? A catalyst is a material which participates in a reaction when you transform A to B, but doesn't get consumed in the reaction. The most common example, which you can see in your day-to-day life, is the exhaust gas catalyst installed in your car, which turns off-gases from your car into CO2 and water as exhaust. And these are the things which we get as requests. People come to us and say, "Oh, I would like to develop a new material," or things like, "I have this process, and I want to accelerate my research and develop a new process for this." And when we have an experiment in our team, we are designing experiments and trying to optimize the testing, and for all of this we use JMP, but this is not what we want to talk about today. Because, as I said before, we are using JMP also to merge our data, process them and make them ready, which are the two parts you see at the bottom of the presentation.
We are executing research projects for customers in our proprietary tool called Flowrence, where the trick is that we don't execute tests one after another, but in parallel. Traditionally, and I remember this myself from my PhD, you execute a test one reactor after another, after another. But we are applying up to 64 reactors in parallel, which makes the execution more challenging but allows data-driven decisions. It allows us to produce more reliable data and make them statistically significant. And then we are reporting this data to our customers, who can then either continue in their own tools with their further insights, or rely on us completely for executing the tests and extracting the knowledge. But yeah, enough about the company. Now let me hand over to Carlos, who will explain how JMP and JSL scripts actually help to make our life significantly easier.

Carlos Ortega: Thank you, Hendrik, for the nice introduction. And thank you also to the organizers for this nice opportunity to participate in the JMP Discovery Summit. As Hendrik was mentioning, we develop and execute research projects for third parties. And if we think about it, we need to go from design of experiments (and that's of course one very powerful feature of JMP), but we also need to manage information, and in this case, as Hendrik was mentioning, we want to focus on JSL scripts that allow us to easily handle information and create seamless integration of our process workflows.

I'm a project leader in the R&D department, and a regular day in my life here would look something like this, in a very simplistic view. You have clients who are interested and have a research question, I design experiments, and we execute these in our own proprietary technology called Flowrence. So in a simple view, the data generated in the Flowrence unit goes through me and, after some checks and interpretation, goes back to the client. But the reality is somewhat more complex. On one hand, we also have internal customers, for example our business development team. And on the other side, we also have our own staff that interacts directly with the unit; they control how the unit operates and monitor that everything goes according to plan. And the data, as you see here with broken lines, cannot be extracted directly from the unit. The data is actually sent to a data warehouse, and then we need a set of tools that allows us to first retrieve information, merge information that comes from different sources, execute a set of tasks that go from cleaning, processing and visualizing information, and eventually export that data to the client, so that the client can get the information that they actually need and that is most relevant for them.

If you allow me to focus for one second on these different tasks, what we observe initially in the retrieve and merge step is that data can come from various sources. In the data warehouse, we collect data from the Flowrence unit, but we also collect data from the analyzers. For those performing tests in a laboratory, you might be familiar with mass spectrometry or gas chromatography, for example. We also collect data on the unit performance, so we also verify that the unit is behaving as expected. And, as in any laboratory, we also have manual inputs.
These could be, for example, information on the catalysts that we are testing, or calibration of the analytical equipment. Those manual inputs are of course always stored in a laboratory notebook, but we also include that information in an Excel file. And this is where JMP is actually helping us drive the workflow of information to the next level. What we have developed is a combination of an easy-to-use, widely known Excel file with the powerful features of a JSL script. Not only do we include manual data that is available in laboratory notebooks, but we also include in this Excel file formulas that are then interpreted and executed by the JSL script. That allows us to calculate key performance parameters that are tailored or specifically suited to different clients.

If we look in more detail into the JSL script (and in a moment I will go into a demo), you will observe that the JSL script has three main sections. One section prepares the local environment: on one side we clear all the symbols and close tables, but probably the most important feature is when we define "Names Default To Here", which allows us to run parallel scripts without any interference between variables that are named the same in different scripts. Then we have a section, collapsed in this case so that we can show it, that creates the graphical user interface. The user does not interact with the script itself, but works through a simple graphical user interface with buttons that have descriptive names. And then we have a set of tasks that are already coded in the script, in this case in the form of expressions. This has two main advantages: first, it's easy to later implement them in the graphical user interface; and second, when you have an expression, you can use this expression several times in your code.

OK, so moving on to the demo simulation. I mentioned earlier that we have different sources of data. On one side we have data that is stored in our database, and this database will contain different sources of information, like the unit or different analyzers. In this case, you see an example Excel table, only for illustration; this data is actually taken from the data warehouse directly with our JSL script, so we don't look at this Excel table as such; we let the software collect the information from the data warehouse. And probably what is most important is that this data, as you see here, can come again from different analyzers, and it is structured so that the first column contains the variable names (in this case we have made up some dummy names for reasons of confidentiality), and all the observations are arranged in rows. So every single row is an observation. And depending on the type of test and the unit we are using, we can collect up to half a million data points in one single day. That depends of course on the analyzer, but you are immediately faced with the amount of data that you have to handle, and with how a JSL script that helps you process information can help you with this activity. Then we also use another Excel file, which is also very important: an input table file. And these files, together with the JSL script, are the ones creating the synergy that allows us to process data easily.
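Editor's note: A minimal sketch of the three-section script structure described here (environment preparation, tasks written as reusable expressions, and a button-driven graphical user interface) might look like the following; the window title, button labels and file path are placeholders, not the presenters' actual code.

```jsl
// 1. Prepare the local environment
Names Default To Here( 1 );   // keep names private so parallel scripts do not collide
Clear Symbols();
Close All( Data Tables, No Save );

// 2. Tasks coded as expressions, so each one can be reused and wired to a button
retrieveData = Expr(
	// placeholder: retrieve and merge warehouse data plus the Excel input file
	dt = Open( "$DOCUMENTS/raw_database_data.jmp" );
);
exportData = Expr(
	// placeholder: subset to the client-facing columns and export
	Write( "Exporting selected columns...\!n" );
);

// 3. A simple graphical user interface with descriptive button names
New Window( "Project workflow",
	Button Box( "Create/Update data tables", retrieveData ),
	Button Box( "Export and email to client", exportData )
);
```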
What you see in this case, for example, is a reactor loading table, and we see different reactors with different catalysts. This information is not quantitative, but the qualitative value is important. If we move to a second tab (and these tabs are all predefined across our projects), we see the response factors for the analyzers. Different analyzers will have different response factors, and it's important to log this information for use in the calculations, to be able to get quantitative results. In this case, we observe that the response factors are given per condition instead.

Then we have a formula tab, and this is probably the key tab for our script. You can input formulas in this Excel file; you just make sure that the variable names are enclosed in square brackets. And for the formula, you can use any formula in Excel. Anyone can use Excel; we're very much used to it. So if you type a formula here that follows the syntax in Excel, it will be executed by our JSL script. We also included an additional feature, because we thought it was interesting to have conditionals; for the JSL script to read a conditional, the only requirement is that the conditionals are enclosed in braces.

There are two other tabs I would like to show you, which are highly relevant. One is an export tables tab, and the reason we have this table is that we generate many columns or variables from the unit, probably 500 variables, but the client is only interested in 10, 20 or 50 of them; those are the ones that really add value to their research. So we can input those variables here and send only those to the client. And last but not least, I think many of us have been in the situation where we send an email to a wrong address, and that can be something frightening when you're talking about confidential information. So we always double and triple check the email addresses, but is that really necessary? What we are doing here is that we have one Excel file that contains all manual inputs, including the email addresses of our clients. These email addresses are fixed, so there is no room for error. Whenever you run the JSL script, the right email addresses will be read and the email will be created, as we will see in a minute.

Now, going into the JSL script, I would like to highlight the following. The JSL script is located in one single file in one single folder, and the JSL script only needs the one Excel file that contains the different tabs we just saw in the previous slide. Once you open the JSL script, you can click on the Run Script button, and that will open the graphical user interface that you see on the right. Here we have different options. In this case we want to highlight the option where we retrieve data from a project in a given period; we have selected here only one day this year in particular, and then we see different buttons that allow us to create updates, for example. Once we have clicked on this button, you will see on the left, in the folder, that two directories were created. The fact that we create these directories automatically helps us to standardize how folders are structured across our projects. If you look into the raw database data, you will see that two files were created. One contains the raw data that comes directly from the data warehouse.
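Editor's note: The square-bracket convention lends itself to a simple translation step in JSL. The hedged sketch below (table, column names and formula are invented for illustration) converts an Excel-style formula string such as "[Flow A] / ([Flow A] + [Flow B]) * 100" into a live JMP column formula by rewriting each "[Name]" as a scoped column reference and parsing the result.

```jsl
Names Default To Here( 1 );

// Toy table standing in for merged warehouse data
dt = New Table( "Demo",
	Add Rows( 3 ),
	New Column( "Flow A", Numeric, Set Values( [10, 20, 30] ) ),
	New Column( "Flow B", Numeric, Set Values( [90, 80, 70] ) )
);

// Formula as it would appear in the Excel input file
excelFormula = "[Flow A] / ([Flow A] + [Flow B]) * 100";

// Rewrite every [Name] as :Name( "Name" ) so JMP can parse it as a column reference
jslFormula = Regex( excelFormula, "\[([^\]]+)\]", ":Name( \!"\1\!" )", GLOBALREPLACE );

// Build and run the New Column command with the translated formula
Eval( Parse(
	"dt << New Column( \!"Conversion %\!", Numeric, Formula( " || jslFormula || " ) )"
) );
```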
And the second data table contains all the merged information from the Excel file and the different tables that are available in the data warehouse. The Exported Files folder does not contain anything at this moment, because we have not yet evaluated and assessed whether the data we created in our unit is actually relevant and valuable for the client. We do this next, and you see here that we have created a plot of reactor temperature versus the local time. The different reactors are plotted, so we have up to 64 in one of our units, and in this case we color the reactors depending on their location in the unit. Another tab we have here, as an example, is about the pressure. You see that you can also script maximum, target and minimum values and define, for example, alerts to see if a value is drifting away. The last tab I want to show is the conversion, and we see here the different conversions collapsed by catalyst.

Once we click the Export button, we will see that our file is attached to an email, and the email already contains the email addresses we want to use. And again, I want to highlight how important it is to send the information to the right person. This data set is now located in the Exported Files folder, which was not there before, so we can always keep track of what information has been exported and sent to the client. With this email it's then only a matter of filling in the information. In this case, it's a very simple text: "this is your data set," but of course we would give some interpretation or maybe some advice to the client on how to continue the tests. And of course, once you have covered all these steps, you close the graphical user interface, and that will also close all open tables and the JSL script.

Something that I would like to highlight at this point is that this workflow using a JSL script is rather fast. What you saw just now is of course a bit accelerated, because it's only a demonstration, but you don't spend time looking for data in different sources and trying to merge them with the right columns. All these processes are integrated into a single script, and that allows us to report to the client on a daily basis amounts of data that otherwise would not be possible, and the client can take data-driven decisions at a very fast pace. That's probably the key message that I want to deliver with this script.

Now, I would like to wrap up the presentation with some concluding remarks. On one side, we developed a distinctive approach for data handling and processing. When we say distinctive, it's because we have created a synergy between an Excel file, which most people can use because we are all very familiar with Microsoft Office, and a JSL script which doesn't need any effort to run: you click Run, you get a graphical user interface and a few buttons to execute tasks. Then we have a standardized workflow, and that's also highly relevant when you work with multiple clients, and also from a practical point of view. For example, if one of my colleagues goes on holiday, it is easy for another project leader, or for myself, to take over the project and know that all the folder structures are the same, that all the scripts are the same, and that the buttons execute the same actions.
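Editor's note: In JSL, the final export-and-email step can be scripted along these lines. This is a hypothetical fragment: the recipient, folder and file names are invented, and the Mail() function depends on the operating system's email client, so treat it as illustrative rather than as the presenters' exact code.

```jsl
Names Default To Here( 1 );

// Stand-in for the client-ready table produced by the workflow
dtExport = Open( "$SAMPLE_DATA/Big Class.jmp" );

// Save into a dedicated "Exported Files" folder so every export is tracked on disk
Create Directory( "$DOCUMENTS/Exported Files" );
exportPath = "$DOCUMENTS/Exported Files/daily_update.jmp";
dtExport << Save( exportPath );

// Fixed recipient (normally read from the Excel input file, so there is no room for error)
Mail( "project.contact@client.example", "Daily data update",
	"Please find attached today's data set.", exportPath );
```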
Finally, we can guarantee seamless integration of data, and these fast updates of information, with thousands or even half a million data points per day, can be quickly sent to clients, which allows them to take almost online, data-driven decisions. In the end, our purpose is to maximize customer satisfaction through a consistent, reliable and robust process. With this, I would like to thank, again, the organizers of this Discovery Summit and, of course, all our colleagues at Avantium who have made this possible, especially those who have worked intensively on the development of these scripts. If you are curious about our company or the work we do in catalysis, please visit one of the links you see here. And with this, I'd like to conclude. Thank you very much for your attention, and we look forward to your questions.
Kelci Miclaus, Senior Manager Advanced Analytics R&D, JMP Life Sciences, SAS

Reporting, tracking and analyzing adverse events occurring to patients is critical in the safety assessment of a clinical trial. More and more, pharmaceutical companies and the regulatory agencies to whom they submit new drug applications are using JMP Clinical to help in this assessment. Typical biometric analysis programming teams may create pages and pages of static tables, listings and figures for medical monitors and reviewers. This leads to inefficiencies when the doctors who understand the medical impact of the occurrence of certain events cannot directly interact with adverse event summaries. Yet even simple count and frequency distributions of adverse events are not always so simple to create. In this presentation we focus on key reports in JMP Clinical that compute adverse event counts, frequencies, incidence, incidence rates and time to event occurrence. The out-of-the-box reports in JMP Clinical make fully dynamic adverse event analysis look easy, even while performing complex computations that rely heavily on JMP formulas, data filters, custom-scripted column switchers and virtually joined tables.

Auto-generated transcript...

Kelci J. Miclaus: Hello and welcome to JMP Discovery Online. Today I'll be talking about adverse event summaries in clinical trial analysis. I am the Senior Manager of the advanced analytics group in the JMP Life Sciences division here at SAS, and we work heavily with customers using genomic and clinical data in their research.

Before I go through the details around using JMP for adverse event analyses, I want to introduce the JMP Clinical software which our team creates. JMP Clinical is one of a family that now includes five official products as well as add-ins, which can extend JMP to allow you to have as many types of vertical applications or extensions of JMP as you want. My development team supports JMP Genomics and JMP Clinical, which are customized vertical applications built on top of JMP and used for genomic research and clinical trial research, respectively. Today I'll be talking about how we've created reviews and analyses in JMP Clinical for pharmaceutical companies that are doing clinical trial safety and early efficacy analysis.

The original purpose of JMP Clinical, and the instigation of this product, actually came through assistance to the FDA, which is a heavy JMP user, and their CDER group, the Center for Drug Evaluation and Research. Their medical reviewers were commonly using JMP to help review drug submissions, and they love it; they're very accomplished with it. One of the things they found, though, is that certain repetitive actions, especially on very standard clinical data, could be pretty painful. An example here is the idea of something called a shift plot for laboratory measurements, where you compare the trial average of a laboratory value versus the baseline against treatment groups. In order to create this, it took at least eight to 10 steps within the JMP interface: opening up the data, normalizing the data, subsetting it into baseline versus trial, computing statistics respectively for those groups, merging it back in, then splitting that data by lab test so you could make this type of plot for each lab. And that's not even getting to the number of steps within Graph Builder to build it.
So JMP clearly can do it, but what we wanted to do is solve their pain with this very standard type of clinical data with one-click lab shift plots, for example. In fact, we wanted to create clinical reviews in our infrastructure, which we call the Review Builder, that are one-click, standardized, reproducible reviews for many of the highly common standard analyses and visualizations that are required or expected in clinical trial research to evaluate drug safety and efficacy. JMP Clinical has evolved since that first instigation of creating a custom application for a shift plot into full-service clinical trial analysis software that covers medical monitoring and clinical data science, medical writing teams, biometrics and biostatistics, as well as data management around the study data involved with clinical trial collection. This goes for safety and efficacy, but also for operational integrity, or operational anomalies that might be found in the collection of clinical data as well.

Some of the key features of JMP Clinical that we find especially useful for those using the JMP interface for any type of analysis are things like virtual joins. We have the idea of a global review subject filter, which I'll show you during the demonstrations for adverse events, that really allows you to integrate and link the demography information, the demographics about our subjects on a clinical trial, to all of the clinical domain data that's collected. This architecture, which is enabled by virtual joins within the JMP interface with row state synchronization, allows you to have instantaneous interactive reviews with very little to no data manipulation across all the types of analyses you might be doing in a clinical trial data analysis.

Another new feature we've added to the software, which also leverages the power of the JMP data filter as well as the creation of JMP indicator columns, is the ability, while you're interactively reviewing clinical trial data, to find interesting signals (in this example, the screenshot shown is of subjects that had a serious adverse event while on the clinical trial) and, quite immediately, create an indicator flag that is stored in metadata with your study in JMP Clinical and is available for all other types of analyses you might do. So you can say: I want to look now at my laboratory results for patients that had a serious adverse event versus those that didn't, to see if there are also anomalies that might be related to an adverse event severity occurrence.

Another feature that I'll also be showing with JMP Clinical in the demonstration around adverse event analysis is the JMP Clinical API that we've built into the system. One of the most difficult things about providing, creating and developing a vertical application that has out-of-the-box one-click reports is that you get 90% of the way there and then the customer might say, oh, well, I really wanted to tweak it, or I really wanted to look at it this way, or I need to change the way the data view shows up. So one of the things we've been working hard on in our development team is using JMP scripting (JSL) to surface an API into the clinical review, to have control over the objects and the displays and the dashboards and the analyses, and even the data sets that go into our clinical reviews. I'll also be showing some of that in the adverse event analysis.
So let's back up a little bit and go into the meat of adverse events in clinical trials, now that we have an overview of JMP Clinical. There are really two key ways of thinking about this. There's the safety review aspect of a clinical trial, which is typically counts and percentages of the adverse events that might occur. A lot of the medical doctors, monitors or reviewers often use this data to understand medical anomalies; if a certain adverse event starts showing up more commonly with one of the treatments, that could have medical implications. There's also statistical signal detection, the idea of statistically assessing whether adverse events are occurring at an unusual rate in one of the treatment groups versus the other.

So here, for example, is a traditional static table of the kind you see in much of the research, submissions or communications around a clinical trial adverse event analysis. Basically it's a static table with counts and percents, and if it is more statistically oriented, you'll also see things like confidence intervals and p-values around things like odds ratios, relative risks or rate differences. Another way of viewing this is visually instead of in a tabular format: for signal detection, looking at, say, the odds ratio or the risk difference, you might use Graph Builder to show the results of a statistical analysis of the incidence of certain adverse events and how they differ between treatment groups.

So those are two examples. In fact, from the work we've done and the customers we've worked with around how they view and have to analyze adverse events, the JMP Clinical system now offers several common adverse event analyses, from simple counts and percentages, to incidence rates or occurrences, to statistical metrics such as risk difference, relative risk and odds ratio, including some exposure-adjusted time-to-event analyses. We can also get a lot more complex with the types of models we fit and really go into mixed or Bayesian models as well in finding certain signals with our adverse event differences. And we also use this data heavily in reviewing the medical data in either a medical writing narrative or a patient profile.

So now I'm going to jump right into JMP Clinical with a review that I've built around many of these common analyses. One of the things you'll notice about JMP Clinical is that it doesn't exactly look like JMP, but it is JMP: it's a combined, integrated solution that has a lot of custom JSL scripting to build our own types of interfaces. Our starter window here lays out studies, reviews and settings, for example. And I already have a review built here that is using our example nicardipine data. This data ships with the product; it's also available in the JMP sample library. It's a real clinical trial looking at subarachnoid hemorrhage, with about 900 patients. What this first tab of our review is looking at is just the distribution of demographic features of those patients: how many were males versus females, their race breakdowns, what treatment group they were given, the sites the data was taken from, etc. This is very common, just as the first step of understanding your clinical data for a clinical trial. You'll notice here we have a report navigator that shows the rest of the types of analyses that are available to us in this built review.
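Editor's note: Outside of JMP Clinical, the virtual-join pattern described above can be set up in plain JSL with the Link ID and Link Reference column properties. The sketch below uses hypothetical CDISC-style table and column names (dm.jmp, ae.jmp, USUBJID); the options for row state synchronization vary by JMP version, so treat it as a starting point rather than the product's internal code.

```jsl
Names Default To Here( 1 );

dm = Open( "$DOCUMENTS/dm.jmp" );   // demography: one row per subject (illustrative path)
ae = Open( "$DOCUMENTS/ae.jmp" );   // adverse events: one row per reported event

// Mark the subject ID in the demography table as the link key...
Column( dm, "USUBJID" ) << Set Property( "Link ID", 1 );

// ...and point the adverse events table back at it, so demographic columns
// (and, with the appropriate options, row states such as filters and selections)
// are available in the AE table without a physical merge.
Column( ae, "USUBJID" ) << Set Property( "Link Reference", Reference Table( "dm.jmp" ) );
```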
I'm going to walk through each of these tabs quickly to show you all the different flavors of ways we can look at adverse events in the clinical trial data set. Now, the typical way data is collected in clinical trials is an international standard called the CDISC format, which typically means that we have a very stacked data set. Here we can see it, where we have multiple records for each subject indicating the different adverse events that might have occurred over time. This data is going to be paired with the demography data, which is one row per subject, as seen here in the demography table. So we have about 900 patients, and you'll see in this first report that we have about 5,000 or 5,500 records of different adverse events that occurred. This is probably the most commonly used report by many of the medical monitors and medical reviewers that are assessing adverse event signals. What we have here is basically a dashboard that combines a Graph Builder counts plot with an accompanying table, as they are used to seeing these kinds of tables. Now, the real value of JMP is its interactivity and that dynamic link directly to your data, so that you can select anywhere in the data and see it in both places. Or, more powerfully, you can control your views with column switchers. Here we can actually switch from looking at the distribution by treatment to sex versus race. You'll notice with race, if we remember, we had quite a few subjects that were white in this study, so this isn't a great plot when we look at it by counts, so we might normalize and show percents instead. And we can also just decide to look at the overall holistic counts of adverse events as well. Another part of using this column switcher is the ability to categorize what kind of events those were. Was it a serious adverse event? What was its severity? What was the outcome? Did they recover from it or not? What was causing it? Was it related to the study drug? All of these are questions that medical reviewers will often ask to find interesting or anomalous signals in adverse events and their occurrences. Now, one of the things you might have already noticed in this dashboard is that I have a column switcher here that's actually controlling both my graph and my table. So when I switch to severity, this table switches as well. This was done with a lot of custom JSL scripting specific to our purposes, but I'll tell you a secret: in JMP 16, the Column Switcher is going to allow this type of flexibility, so you can tie multiple platform objects to the same column switcher to drive a complex analysis. I'm going to come back to this occurrence plot, even though it looks simple. Here's another instance of it that's actually looking at overall occurrence, where certain adverse events might have occurred multiple times for the same subject. I'm going to come back to these, but first quickly go through the rest of the analyses in these reviews before returning to some of the complexities of the simple Graph Builder and Tabulate distribution reports. The next section in our review here is an adverse event incidence screen. Here we're making the progression from just looking at counts and frequencies, or possibly incidence rates, to a more statistical framework of testing for the difference in incidence of certain adverse events in one treatment group versus another. And here we are representing that with a volcano plot.
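Outside of the product's custom scripting, the basic column-switcher idea looks something like this minimal JSL sketch; the column names (:AEDECOD, :TRTA, :AESEV, :AESER, :AEREL) are assumed CDISC-style names rather than the exact ones in the demo.

    // Counts plot of adverse event terms, colored by a classification column
    dtAE = Data Table( "ae" );
    gb = dtAE << Graph Builder(
        Variables( X( :AEDECOD ), Overlay( :TRTA ) ),
        Elements( Bar( X, Legend( 1 ) ) )
    );

    // Add a column switcher so the overlay can flip among classification columns
    gb << Column Switcher( :TRTA, {:TRTA, :AESEV, :AESER, :AEREL} );

Tying one switcher to several platform objects at once, as shown in the demo, relies on custom scripting today and on the Column Switcher enhancements described for JMP 16.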
So we can see that phlebitis, hypotension, and isosthenuria occur much more often in our treatment group, those treated with nicardipine, versus those on placebo. We can actually select those and drill into a very common view for adverse events, our relative risk forest plot, which is often still easier to read when you're only looking at those interesting signals that have possibly clinically or statistically significant differences. Sometimes clinical trials take a long time. Sometimes subjects are on them for a few weeks, like this study, which was only a few weeks, but sometimes they're on them for years. So sometimes it's interesting to think about how adverse event incidence differences evolve as the trial progresses. We have this capability as well within the incidence screen report, where you can chunk the study days into sections to see how the incidence of adverse events changes over time. A good way to demonstrate that might be with an exploding volcano plot here that shows how those signals change across the progression of the study. Another powerful idea, especially as you have longer or more complex clinical trials, is that instead of looking at just direct incidence among subjects, you can consider the time to event or the exposure-adjusted rate at which those adverse events are occurring. That's what we offer within our time-to-event analyses, which are once again shown in a volcano plot, here using a Kaplan-Meier test to look at differences in the time to event of certain events that occur on a clinical trial. One of the nice things here is that you can select these events and drill down into the JMP survival platform to get the full details for each of the adverse events that had perhaps different time-to-event outcomes between the treatment groups. Another flavor of time to event is often called an incidence density ratio, which is the idea of exposure-adjusted incidence density. Basically, the difference here is that instead of using the more traditional proportional hazards or Kaplan-Meier analyses, this is more like a Poisson-style model that's adjusted for how long subjects have actually been exposed to the drug. And once again, we can look at those top signals and drill down to the analogous report within JMP, using a generalized linear model for that specific type of adverse event signal detection. We even offer some really complex Bayesian analyses. One of the things about this type of data is that adverse events typically exist within certain body systems or organ classes, and so there is a lot of prior knowledge that we can impose on these models. Some of our customers' biometrics teams decide to use pretty sophisticated models when looking at their adverse events. So far, we've walked from what I would consider pretty simple distribution views of the data and count plots of adverse events into very complex statistical analyses. I'm going to come back now to that seemingly simple count and frequency information, and I want to spend some time here showing the power of the JMP interactivity that we have. As you recall, one of the differences here is that this table is a stacked table that has all of the occurrences of our adverse events for each subject, while our demography table, which we know has 900 subjects, is separate.
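To make the volcano plot idea concrete, here is a minimal plain-JSL sketch built on a hypothetical summary table with one row per adverse event term and columns :Odds Ratio and :p Value; JMP Clinical builds and plots this kind of summary for you.

    // Volcano-style plot: effect size on X, evidence strength on Y
    dtRes = Data Table( "AE Incidence Summary" );   // hypothetical results table

    dtRes << New Column( "Log2 Odds Ratio", Numeric, Continuous,
        Formula( Log( :Odds Ratio, 2 ) ) );         // log base 2 of the odds ratio
    dtRes << New Column( "NegLog10p", Numeric, Continuous,
        Formula( -Log10( :p Value ) ) );            // -log10 of the p-value

    gb = dtRes << Graph Builder(
        Variables( X( :Log2 Odds Ratio ), Y( :NegLog10p ) ),
        Elements( Points( X, Y, Legend( 1 ) ) )
    );

Points far from zero on the X axis and high on the Y axis are the adverse event terms with both large and statistically unusual treatment differences, which is why the plot is useful for screening.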
So what we wanted was not a static graph, like we have here, or what we would have in a typical report in PDF form; we wanted to be able to interactively explore our data, look at subgroups of our data, and see how those percentages would change. Now, the difficulty is that the percent calculation needs to come from the subject count in a different table. So we've actually done this with formulas, creating column formulas that dynamically recalculate percents upon selection, either when categorizing events or, more powerfully, when using our review subject filter tool. Here, for example, we're looking at all subjects by treatment, perhaps serious versus not serious adverse events, but we can use this global data filter, which affects each of the subject-level reports in our review, to instantaneously change our demography groups and make our percentages respond to this type of subgroup exploration. So here we can actually subgroup down to white females and see what their adverse event percentages and counts are, or perhaps you want to go more granular and understand, for each site, how the data changes across sites. What we really have here is that instead of a submission package or a clinical analysis where the biometrics team hands the medical reviewer 70 different plots and tables to sift through, the reviewer has the power to create hundreds of different tables, subsets, and graphics, all in one interface. In fact, you can really filter down into those interesting categories. So if they were looking, say, at serious adverse events and they wanted to know which serious adverse events were related to the drug treatment, very quickly we got down from our 900 patients to a very small subset of about nine patients that experienced serious adverse events considered related to the treatment. As a medical reviewer, this is a place where I then might want to understand all of the clinical details about these patients. And very quickly, I can use one of our action buttons from the report to drill down to what's called a complete patient profile. Here we see all of the information, now at an individual subject level instead of a summary level, of everything that occurred to this patient over time, including when serious adverse events occurred and the laboratory or vital measurements that were taken alongside them. One of the other main uses of our JMP Clinical system, along with medical review and medical monitoring, is medical writing teams. So another way of looking at this, instead of visually in a graphic or even in a table (these are patient profile tables), is to go up here and generate an automated narrative. Here we're going to launch our adverse event narrative generation. Again, one of the benefits and values of JMP Clinical being a vertical application relying on standard data is that we get to know all the data and the way it is formatted up front, just by being pointed to the study. So what we can do here is run this narrative, which is going to write the actual story of each of those adverse events that occurred. And this is going to open up a Word doc that has all of the details for this subject: their demography, their medical history, and then each of the adverse events and the outcomes or other issues around those adverse events.
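The dynamic-percent idea can be sketched in plain JSL along these lines. This is only an illustration of the concept, with assumed table and column names (ae, dm, :USUBJID, :AEDECOD); JMP Clinical's own formulas are considerably more involved.

    // Percent column on the stacked AE table whose denominator comes from the
    // demography table, counting only subjects not excluded by the current filter
    dtAE = Data Table( "ae" );
    dtAE << New Column( "Pct of Subjects (records)", Numeric, Continuous,
        Format( "Fixed Dec", 6, 1 ),
        Formula(
            100 * Col Number( :USUBJID, :AEDECOD ) /
            N Rows( Data Table( "dm" ) << Get Rows Where( !Excluded() ) )
        )
    );
    // Notes: Col Number counts AE records per event term, not distinct subjects,
    // which keeps the sketch short; counting distinct subjects takes more work.
    // After the filter changes, you may need dtAE << Rerun Formulas to refresh.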
And we can do this for one patient at a time, or we can actually do this for all 900 patients at once and include more complex details like laboratory measurements and vitals, either at baseline or before. Medical reviewers find this incredibly valuable: being able to take standard data sources and not make errors in the transfer from a numeric table to an actual narrative. So I think just with that, you can really see some of the power of these distribution views, these count plots that allow you to drill into very granular levels of the data, and this ability to use subject filters to look either within the entire population of your patients on a clinical trial or within relevant subgroups that you may have found. Now, one thing about the way our global filter works through virtual joins is that it typically only shows information about the demography. One of the other custom tools that we've scripted into this system is the ability to, say, select all subjects with a serious adverse event. We can either derive a population flag and then use that in further analyses, or we can even push that subject set to our global filter, and now we're only looking at subjects who had a serious adverse event, which was almost 300 patients on the clinical trial. Now, even this report, you'll see, is actually filtered. The second report is a different aspect of the distribution of adverse events that is new in our latest version: incidence rates. And here the idea is that instead of normalizing by the number of subjects to calculate a percent of subjects who had an event, for ongoing trials, long trials, or trials across different countries with different startup times, you might want to look at the rate at which adverse events occur. That's what this is calculating. In this case, we're subset down to the subjects that had a serious adverse event, and we can see the rate of occurrence in patient years. For example, this very first one has a rate of about 86 occurrences in every 10 patient years on placebo versus 71 occurrences on nicardipine. This trial was treating subarachnoid hemorrhage, so increasing intracranial pressure would likely happen if you're not being treated with an active drug. These percents, these incidence rates, are also completely dynamic. Once again, these are all being done by JMP formulas that feed into the table automatically and respect the different populations as they're selected by this global filter. So we can look just within, say, the USA and see how the rates change, including the normalized patient years based on just the patients from the USA. So even though these reports look pretty simple, the complexity of the JSL coding that goes into building this into a dashboard is basically what our team does all day. We do this so that you have a dashboard that helps you explore the data easily, without all of these manipulations that could get very complex. Now, the last thing I wanted to show is the idea of a custom or customized report. This is a great place to show it, too, because we're looking here at adverse event incidence rates, by each event. And we have the count, or you can also change that to the incidence rate of how often it occurs per patient year.
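For orientation, the patient-years arithmetic behind an incidence rate looks like this; the numbers below are made up for illustration and are not the study's values.

    // Incidence rate in patient-years: events observed divided by total follow-up time
    nEvents = 258;                    // hypothetical count of a given AE in one arm
    totalFollowUpDays = 10950;        // hypothetical sum of follow-up days across subjects
    patientYears = totalFollowUpDays / 365.25;
    ratePer10PY = 10 * nEvents / patientYears;   // "occurrences per 10 patient-years" is this kind of quantity
    Show( patientYears, ratePer10PY );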
And then an alternative view might be wanting to see these occurrences of adverse events across time, so I want to show that really quickly with our Clinical API. The data table here is fully available to you. One of the things I need to do first is create a numeric date variable; we have a little widget in the data table for doing that, and I'm going to turn the start date into a numeric date. You'll notice this now has a new column at the end: the numeric start date-time of the adverse event. You'll also notice here where all that power comes from: the formulas. These are all formulas that are dynamically regenerated based on populations to create these views. So now that we have a numeric date in this data, we might want to augment this analysis to include a new type of plot, and I have a script to do that. One of the things I'm going to do right off the bat is create a couple of extra columns in our data set for month and year. And then this next bit of JSL is our Clinical API calls. I'm not going to go into the details of this, except to say that it's a way of hooking into the clinical review and gaining access to its sections. So when I run this code, it's going to insert a new section into my clinical review. And here, now, I have a new view of the adverse events as they occurred across year, by month, for all of the subjects in my clinical trial. One of the powers, again, even with this custom view, is that because this table is still virtually joined to our main demography table, it can still fully respond to that global subject filter. So just with a little bit of custom API JSL code, we can take these very standard out-of-the-box reports and customize them with our own types of analyses as well. So I know that was quite a lot of an overview of both JMP Clinical and the types of clinical adverse event analyses that the system can do, which are common for those working in the drug or pharma industry on clinical trials, but I hope you found this section valuable and interesting even if you don't work in the pharma area. JMP Clinical is one of the best examples of the power of JSL to create incredibly customized applications. So maybe you aren't working with adverse events, but you see some things here that can inspire you to create custom dashboards or custom add-ins for your own types of analyses within JMP. Thank you.
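For readers who want to reproduce the data steps behind that last custom view in plain JSL, here is a minimal sketch. The Clinical API calls that insert the section into the review are product specific and omitted; the column name :AESTDTC is an assumed CDISC-style ISO-8601 start date.

    dtAE = Data Table( "ae" );

    // ISO-8601 text start date -> numeric JMP date
    dtAE << New Column( "AE Start Date", Numeric, Continuous, Format( "yyyy-mm-dd" ),
        Formula( Informat( :AESTDTC, "yyyy-mm-dd" ) ) );

    // Month and year columns for plotting occurrences over time
    dtAE << New Column( "AE Year", Numeric, Ordinal, Formula( Year( :AE Start Date ) ) );
    dtAE << New Column( "AE Month", Numeric, Ordinal, Formula( Month( :AE Start Date ) ) );

    // Counts of adverse events by month within year
    gb = dtAE << Graph Builder(
        Variables( X( :AE Month ), Group X( :AE Year ) ),
        Elements( Bar( X, Legend( 1 ) ) )
    );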
Monday, October 12, 2020
Melissa Reed, MS Business Analytics and Data Science, Oklahoma State University   This project is about the early presidential primaries and how the results from those primaries affect who wins the presidency. This research focuses on the presidential primaries in which a new president was elected: the elections of 1992, 2000, 2008, and 2016. The elections of 2000, 2008, and 2016 are included because no incumbent was running; in the election of 1992, Bill Clinton defeated the sitting president, George H. W. Bush, to win the presidency. The election of 1992 is included because George H. W. Bush is the only president who did not get re-elected since the Cold War ended in 1991. The specific primaries examined are the Iowa Caucus, the New Hampshire Primary, and Super Tuesday, because they are early in the election cycle and help predict the rest of the country's primaries. The hypothesis for this research is that the candidate who wins most of the early presidential primaries wins the candidacy and the presidency. JMP software was used to test the hypothesis. The research concluded that the person who wins the most primaries will most likely win the party's candidacy but will not always win the presidency.     Auto-generated transcript...   Speaker Transcript melissareed Hello, my name is Melissa Reed, and I will be presenting my poster about the early presidential primaries. I am from Oklahoma State University. A little bit of background about the early presidential primaries: a lot of people aspire to be the President of the United States, but not a lot of people actually run for it. Campaigns usually start about two years before the November election, but a lot of campaigns do not make it to the Republican and Democratic National Conventions for a number of reasons; either they don't get enough votes or, a lot of the time, they run out of money beforehand. The early presidential primaries that this poster focuses on are the Iowa Caucus, the New Hampshire Primary, and Super Tuesday. These were chosen because they are three early primaries, and typically the rest of the country follows the way they go. They are just really important. So the hypothesis for this project is that the person who wins the most votes during the early presidential primaries will more than likely win the Democratic or Republican candidacy for President of the United States. The elections of 1992, 2000, 2008, and 2016 were chosen because a new president won the Office of the President of the United States. 1992 is included because President Bill Clinton defeated the sitting president, George H. W. Bush, and George H. W. Bush was the first president since the Cold War ended not to be re-elected. 2000, 2008, and 2016 are included because there were no incumbents running. You can see on the poster that in 1992, 2000, and 2016, the candidates who won the Democratic and Republican candidacies for President of the United States were the two people who had the most votes in those three early presidential primaries. However, in 2008, Barack Obama and Hillary Clinton were the two candidates who got the most votes, but because they are both Democrats, they could not both get the candidacy, and the Republicans nominated John McCain.
So to do the analysis, I used JMP to run a correlation analysis and a logistic regression. I ran the correlation analysis between the year and how many votes were cast to see if there was a connection between them. I ran the logistic regression between the candidate, the year, and the state primaries to see which candidate was most likely to beat the other candidates. The results of that regression are down below in the results section. In 1992, the New Hampshire Primary was the one I focused on, because George H. W. Bush did not have anyone running against him in the Iowa Caucus, so I chose New Hampshire for that year. Across the elections of 1992, 2000, 2008, and 2016, the logistic regression showed that the person predicted most likely to win isn't always the candidate who actually got the most votes. In 2008, Barack Obama was shown to win some contests against Hillary Clinton, but not against everyone else. In conclusion, the person who wins the most votes in the Iowa Caucus, the New Hampshire Primary, and Super Tuesday will most likely win the Democratic or Republican candidacy and go on to run for President of the United States. 2008 was different because the two people who won the most votes in those primaries were both Democrats. Thank you so much.
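For anyone who wants to reproduce this kind of analysis in JSL, here is a minimal sketch of the two steps; the table and column names (:Candidate, :Year, :Contest, :Votes, :Won Candidacy) are hypothetical stand-ins for the poster's actual data.

    dt = Data Table( "Early Primary Results" );   // hypothetical table of early-primary results

    // Correlation between year and the number of votes cast
    dt << Multivariate(
        Y( :Year, :Votes ),
        Estimation Method( "Row-wise" )
    );

    // Logistic regression of the (nominal) outcome on candidate, year, and contest
    dt << Fit Model(
        Y( :Won Candidacy ),
        Effects( :Candidate, :Year, :Contest ),
        Personality( "Nominal Logistic" ),
        Run
    );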