This talk takes you through the journey of developing and testing a JMP add-in. First, the initial statistical problem is introduced, which was designing discrete choice experiments tailored to different demographic groups so that the usability of JMP Easy DOE could be analyzed.

Next, the process for developing an add-in using the JMP scripting language (JSL) is outlined, followed by an explanation of how the add-in helped solve multiple instances of the statistical problem in a seamless and efficient manner.

The last step was to implement a process for validating the add-in to ensure it was operating as intended, using the JSL unit testing framework. The automated unit testing scripts written with the JSL unit testing framework were used to check whether any bugs or errors had been introduced, significantly aiding the development process.

Hello, everyone. My name is William Fisher. Today, I'm going to give a talk on my journey of creating and validating a JMP add-in. Just to give a brief introduction of today's talk, I want to first reiterate that while we all know JMP has a wide range of capabilities, there can be some problems that JMP cannot directly or conveniently solve through its current platforms.

An example of such a problem is one I worked on during my summer internship last year, which involved designing a so-called contextual discrete choice experiment. With JMP's current platforms, I wasn't able to directly design these types of experiments. Through JSL scripting (JSL is JMP's proprietary scripting language), I was able to create an application, or add-in, that extends the functionalities of JMP to solve the problem that I had.

On top of this, I was creating this application or add-in hoping that other people would use it, potentially for a long period of time across different versions of JMP. One thing that's important when you're thinking long-term about developing an application is unit testing, which gives you a way to make sure that any changes that I, the developer, make, or changes that are made to JMP, won't create bugs or affect the application we make. JMP has its own JSL unit testing framework that allows users to create unit tests for the applications they develop. That's the introduction of this talk.

Basically, the goals of today's talk are these: first, I want to walk you through the process of developing an application that can design these so-called contextual discrete choice experiments. After that, I want to walk you through my experience of creating a suite of unit tests to validate my application. Lastly, I want to walk you through how I actually used the add-in, or application, I made to design a real-world contextual discrete choice experiment. Let's begin then.

Before we create an add-in or an application, we always have to define and understand the underlying statistical problem that we're trying to solve. In my case, the problem I was trying to solve was designing contextual discrete choice experiments. As a little bit of background on this particular type of experiment, I'll first go over what a classical discrete choice experiment is.

Classical discrete choice experiments are a type of experiment where subjects, people, are presented with a sequence of choice sets or questions, and in each choice set, they are asked to select their most preferred option. These options within each choice set are characterized by a set of design factors or attributes.
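The talk doesn't spell out the underlying model, but discrete choice analysis typically assumes a multinomial logit model: each option's utility is linear in its attributes, and choice probabilities follow a softmax over the options in a choice set. A minimal Python sketch of that idea (the attribute codings and coefficients below are made up for illustration):

```python
import math

def choice_probabilities(choice_set, beta):
    """Multinomial-logit probabilities for one choice set.

    choice_set: list of option attribute vectors (one list of floats per option)
    beta:       part-worth coefficients, one per attribute
    """
    # Linear utility for each option: u_j = sum_k beta_k * x_jk
    utilities = [sum(b * x for b, x in zip(beta, option)) for option in choice_set]
    # Softmax: P(option j chosen) = exp(u_j) / sum_i exp(u_i)
    m = max(utilities)  # subtract the max for numerical stability
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Two options described by two (hypothetical) attributes each
probs = choice_probabilities([[1.0, 0.0], [0.0, 1.0]], beta=[0.8, 0.2])
```

With these made-up coefficients, the option scoring higher on the more-valued attribute gets the higher choice probability, which is exactly the information a choice experiment tries to recover.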

Within JMP, we have the Choice Design and choice analysis platforms, which provide experimenters a way to design and analyze these discrete choice experiments. During my internship with JMP last year, we extended discrete choice experimentation by considering a new framework called Contextual Discrete Choice Experimentation.

In Contextual Discrete Choice Experimentation, we assume that our users' preferences may differ based on some contextual information. One example of contextual information is demographic information, such as age group, country of origin, the user's experience level, and so on. In Contextual Discrete Choice Experimentation, we assume that user preferences can change depending on the context.

Designing contextual discrete choice experiments involves gathering information on different demographic groups' preferences and using that information to design discrete choice experiments tailored to specific demographic groups, so that we gain more statistical information when we present these different demographic groups with their choice sets.
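One common way to make "preferences differ by context" concrete is to let each part-worth shift with the respondent's context. The fragment below is a sketch only: the linear-shift form and all names are assumptions for illustration, not the add-in's actual model.

```python
def contextual_utility(attributes, context, beta, gamma):
    """Utility of one option for a respondent with a given context.

    attributes: option attribute vector x
    context:    respondent context vector z (e.g. dummy-coded demographics)
    beta:       baseline part-worths, one per attribute
    gamma:      gamma[k][c] shifts part-worth k per unit of context variable c
    """
    utility = 0.0
    for k, x in enumerate(attributes):
        # Effective part-worth for this respondent: baseline + context shifts
        effective = beta[k] + sum(g * z for g, z in zip(gamma[k], context))
        utility += effective * x
    return utility
```

Under this formulation, two respondents with different contexts can rank the same pair of options differently, which is what makes tailoring the design to a demographic group worthwhile.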

We want to somehow gather prior information on users' preferences and use that information to design the subsequent studies. I'm going to present how I developed a tool to do this. Here's the workflow of the add-in I made. To get prior information on different users' preferences, we gather data from a pilot study. This data includes information such as choice set identifiers, choice indicators, the options provided within each choice set, and information on the user's context, such as demographic information. All of this information comes from a pilot study.

Then, once we have this data set from the pilot study, the add-in specifies a pilot model to analyze it. Here we create a utility model based on both the attributes of the options we're studying and the user's contextual information. We use the discrete choice analysis platform in JMP to estimate the parameters of this pilot model.

Then, after that, we select a demographic group for further study, which involves specifying levels of the contextual factors. Once we specify the demographic group for further study, we gather information from the pilot model, incorporate that information into the Choice Design platform, and use it to design a discrete choice experiment tailored to the demographic group that we wanted to study further. That's the workflow of my add-in.
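The "select a group, then carry the pilot model into the design step" part of the workflow amounts to collapsing the fitted contextual model at the chosen group's context levels, yielding a single prior mean vector for the design. A hedged sketch, assuming a contextual utility model whose part-worths shift linearly with context (names and structure are illustrative, not the add-in's internals):

```python
def group_prior_means(beta, gamma, group_context):
    """Prior part-worth means for one demographic group.

    beta:          baseline part-worths from the pilot model
    gamma:         gamma[k][c] = shift of part-worth k per unit of context c
    group_context: the context levels that define the selected group

    Collapses baseline + context shifts into one prior mean vector, which is
    the kind of input a Bayesian choice design needs.
    """
    return [b + sum(g * z for g, z in zip(g_row, group_context))
            for b, g_row in zip(beta, gamma)]
```

For example, a group coded `[1.0]` on a single context variable gets each part-worth shifted by the corresponding entry of `gamma`.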

On the right here, you can see a demonstration of the add-in in video form. When developing this add-in, for each part of this workflow, I had to develop ways to bring in information and process it, and I was able to use different platforms in JMP to do that. At certain points, I also had to make my own controls and user interface to tell the system how to process the data. There are a lot of resources for learning JSL scripting, such as the Scripting Guide, the Scripting Index, and JMP itself: you can always save your work in JMP as a script, and you can learn a lot about writing JSL from that.

One other important aspect, on top of development, is actually validating the application we made. JMP has its own unit testing framework, called the JSL Unit Testing Framework. Unit testing allows us to test individual components of our application or add-in to identify errors. It also gives us a way to make sure that when we make changes to our application, or when JMP updates to a new version, no errors are introduced into our application.

JSL has a very nice unit testing framework that can be used to develop unit tests and test different parts of your application. I have a couple of examples of things that we tested. For example, in this application, we tested the output of the Cross button to make sure it was correct, and that an error message would show up when the user forgot to specify the level of a contextual factor while selecting the group for further study. We also tested that the prior information we got from our pilot study was loaded correctly into the Choice Design platform once the user selected their demographic group for further study.
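The talk's tests were written in the JSL Unit Testing Framework; the pattern translates directly to any xUnit-style framework. Here is a hypothetical stand-in in Python's `unittest` for the "missing contextual factor level" check described above (the function and factor names are invented for illustration):

```python
import unittest

def validate_group_selection(selected_levels, required_factors):
    """Raise if any contextual factor was left without a selected level."""
    missing = [f for f in required_factors if selected_levels.get(f) is None]
    if missing:
        raise ValueError("No level selected for: " + ", ".join(missing))
    return True

class TestGroupSelection(unittest.TestCase):
    def test_complete_selection_passes(self):
        self.assertTrue(validate_group_selection(
            {"Experience": "New", "Usage": "Personal"},
            ["Experience", "Usage"]))

    def test_missing_level_raises(self):
        # The add-in should surface an error, not silently continue
        with self.assertRaises(ValueError):
            validate_group_selection(
                {"Experience": "New", "Usage": None},
                ["Experience", "Usage"])
```

The point of tests like these is the second half of the talk's argument: once they exist, any later refactor that breaks the validation path fails loudly instead of shipping.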

On top of this, unit testing really helps a lot when you are refactoring your code. In one example, we found an error in the part of the code where the user selects a demographic group for further study: if a contextual factor had value labels, no option for that factor would show up. Because we had written the unit tests beforehand, we could go back to our script, fix the bug, and run the unit tests we made to make sure we didn't break something else. Having those unit tests really helped with the development process.

Lastly, I want to go over how we actually used this application, or add-in. We made it, of course, to solve a certain statistical problem, and at the end of the day, we had to go apply it in a real-world setting. I'll briefly go over the problem we used this add-in to solve. During my summer internship, we were interested in learning user preferences for the layout of the Analyze tab in Easy DOE. What we were specifically studying was a certain feature in the Analyze tab called the Hover Help.

The Hover Help had four design factors. Three were messages: an Entered Message, a Significance Message, and a Heredity Message. These are three different messages that pop up telling you information about certain terms in the statistical model. The fourth factor was the location where the Hover Help pops up when you hover over certain elements in the Analyze tab, with the confidence interval and the preview column as the two options there.
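The talk doesn't say how many variants each message factor had. Assuming two per factor purely for illustration, the full candidate set of Hover Help profiles that a choice design would draw its options from can be enumerated like this (factor names come from the talk; the level codes are placeholders):

```python
from itertools import product

# Illustrative factor levels; variant labels are placeholders, not the study's
factors = {
    "Entered Message":      ["variant A", "variant B"],
    "Significance Message": ["variant A", "variant B"],
    "Heredity Message":     ["variant A", "variant B"],
    "Location":             ["confidence interval", "preview column"],
}

# Full candidate set: one profile per combination of factor levels
profiles = [dict(zip(factors, combo)) for combo in product(*factors.values())]
```

A choice design then assembles choice sets from this candidate list rather than showing every combination to every respondent.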

For our study, to collect data, we first identified three demographic groups based on two contextual factors: the user's experience level with experimental design and the way they use Easy DOE. We had three groups. One was new users who use Easy DOE for personal use, and the other two were users experienced in experimental design who use Easy DOE either for personal use or for teaching others.

Like I said, designing contextual discrete choice experiments requires a pilot study. For our pilot study, we had 17 participants, and we presented them with 8 questions each. For the subsequent studies, we had a total of 41 participants, which included 20 new users and 21 experienced users who either did or did not teach others.

Using our add-in, we were able to create Bayesian D-optimal choice experiments for these three demographic groups based on the data from the pilot study. The users in this subsequent study were presented with eight questions tailored to their demographic group.
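The talk relies on JMP's Choice Design platform for these designs; as a concept sketch only, the Bayesian D-criterion averages the log-determinant of the multinomial logit information matrix over draws from the prior on the part-worths. A small pure-Python illustration (two attributes, so the determinant is computed directly; the design and prior draws below are made up):

```python
import math

def mnl_info_matrix(design, beta):
    """Fisher information of a choice design under a multinomial logit model.

    design: list of choice sets; each set is a list of option attribute vectors
    beta:   assumed part-worth vector (length k)
    Returns a k x k matrix as nested lists.
    """
    k = len(beta)
    info = [[0.0] * k for _ in range(k)]
    for choice_set in design:
        utils = [sum(b * x for b, x in zip(beta, opt)) for opt in choice_set]
        m = max(utils)
        exps = [math.exp(u - m) for u in utils]
        total = sum(exps)
        p = [e / total for e in exps]
        # Probability-weighted mean attribute vector of the set
        x_bar = [sum(p[j] * choice_set[j][a] for j in range(len(p)))
                 for a in range(k)]
        # info += sum_j p_j (x_j - x_bar)(x_j - x_bar)'
        for j, opt in enumerate(choice_set):
            d = [opt[a] - x_bar[a] for a in range(k)]
            for r in range(k):
                for c in range(k):
                    info[r][c] += p[j] * d[r] * d[c]
    return info

def bayesian_d_criterion(design, prior_draws):
    """Average log-determinant of the information matrix over prior draws
    (2-attribute case, so the 2x2 determinant is computed directly)."""
    total = 0.0
    for beta in prior_draws:
        i = mnl_info_matrix(design, beta)
        det = i[0][0] * i[1][1] - i[0][1] * i[1][0]
        total += math.log(det)
    return total / len(prior_draws)
```

A Bayesian D-optimal search would compare candidate designs by this criterion and keep the one with the largest value; the prior draws are where the pilot-study information enters.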

Running this study, we obtained some preliminary results on users' preferences for the Hover Help in the Analyze tab. We found that all three groups preferred a specific form of the Heredity Message, which is "X1 is entered to maintain model heredity as X1, X2 is entered." All three groups were indifferent to the form of the Significance Message.

Lastly, we found a bit of a bifurcation: new users and expert users who teach others preferred the Hover Help to be located over the confidence interval, while expert users who did not teach others were indifferent to the location of the Hover Help. So we saw some differences in user preferences across these demographic groups. That was how we applied our tool in this specific setting.

As a conclusion of my talk today, I was able to show that JSL scripting enabled me to develop an easy-to-use application for designing contextual discrete choice experiments. Through JSL, I was able to extend the basic functionalities of JMP to create an application tailored to a specific problem. On top of that, I was able to develop unit tests along the way that helped the development process by letting me refactor code while making sure I didn't break something else. Lastly, I showed that when you make an application or an add-in, there's always a purpose behind it, and I showed how I applied the add-in I developed to solve a certain statistical problem.

That is my talk for today. Like I said, there are some good resources on JSL scripting: the Scripting Guide, the Scripting Index, and JMP itself. On top of that, there are also good resources online written by Joseph Morgan and Xan Gregg on automated unit testing in JMP; there's a good paper on that to get people started. Thank you.

Presented At Discovery Summit 2025

Skill level

Intermediate

Published on 07-09-2025 08:59 AM by Community Manager | Updated on 10-28-2025 11:40 AM




Start: Wed, Oct 22, 2025 05:15 PM EDT
End: Wed, Oct 22, 2025 06:00 PM EDT
Ped 03