
Exploring Interactions in Regression Models with JMP and the Feasible Solutions Algorithm (2021-US-45MP-841)

Level: Intermediate

 

Joshua Lambert, Assistant Professor, University of Cincinnati

 

During the process of building a regression model, scientists are sometimes tasked with assessing the effects of one or more variables of interest. With an additive regression model, effects (e.g., treatment) are assumed to be equal across all possible subgroups (e.g., sex, race, age). Checking all possible interaction effects is either too time-consuming or impossible with a desktop computer. A new JMP add-in implements an algorithm, the Feasible Solutions Algorithm (FSA), which is meant to explore subgroup-specific effects in large data sets by identifying two- and three-way interactions to add to multivariable regression models. This talk gives a short introduction to the FSA, explains how I moved my R package to JSL, and provides a tutorial on how to use the FSA JMP add-in to explore interactions in your own regression models.

 

 

Auto-generated transcript...

 


Speaker

Transcript

Joshua W Lambert Hello, everyone. My name is Josh Lambert. I'm an assistant professor at the University of Cincinnati and I've been a JMP user for about 10 years now.
  And I'm excited to share with you some work I've been doing around exploring interactions in regression models using an add-in I built that uses an algorithm I developed called the feasible solution algorithm.
  So
  some contents that we're going to talk about today. So I'm going to start off with a little example, which will motivate our discussion, as well as what I've been working on, an overview of the problem.
  A potential solution.
  And then discuss how I've implemented the solution into an R package and then moved that over to a JMP add-in and the process I took doing that. And then I'll talk about some future endeavors and some things I learned along the way.
  So let's motivate our discussion today with a little example. I'm going to call this Tom the Data Scientist. So meet Tom. He's a data scientist and Tom does a lot of typical data science activities, specifically Tom builds multivariable regression models in JMP.
  He mostly deals with tabular data that has many variables, and Tom realizes that these multivariable regression models lack complexity, specifically they lack interaction terms and quadratic terms.
  And Tom...that frustrates Tom, and Tom doesn't want to go right into machine learning.
  So he wishes that he had a way of exploring interaction effects in his regression models.
  he has interactions he'd like to test, but he would have to handcraft those into the fit model platform in JMP.
  And you know, having to run all of these is going to take a lot of time. So, for instance, if Tom has 200 variables in his data set and he needs to look for all two-way interactions, that's 19,900 two-way interactions to check. That's a few too many.
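Those counts are just binomial coefficients; a quick sketch (Python here purely for illustration, not part of the add-in) confirms them:

```python
import math

# Distinct two-way interactions among 200 variables: "200 choose 2"
print(math.comb(200, 2))   # 19900

# Three-way interactions grow much faster
print(math.comb(200, 3))   # 1313400
```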
  So Tom wonders, is there an algorithmic and data-driven way to explore interaction effects in regression models without needing to handcraft them all by hand and the need to check all these possible combinations?
  So let's overview this problem in a little bit more of a mathematically and statistically rigorous way. So what we have in the problem is, we have a problem of volume and complexity.
  So volume and complexity
  is going to continue to grow at a really fast rate. We have more and more data coming available to us all the time, and the complexity of these data are also growing.
  The problem with interactions is that there are just too many of them to check.
  Usually, a data scientist, statistician, or scientist is going to have to walk into the analysis with a known set of interactions to check
  and then proceed to check them. This is a problem for a number of reasons,
  which are that you may not have a good idea as to what interactions may or may not exist, and you'd like to be able to explore them in your data
  without having to go to models, such as random forest or even principal component analysis for doing some sort of data reduction, doing some more complexity there. You'd really like to be able to do things along the way as you build a regression model.
  So the other problem with random forest and principal components, as well as other machine learning types of tasks, is that these things are often difficult to interpret and we'd like to add...keep the interpretability of regression models while we're exploring interactions.
  So typically our workflow in regression
  so the statistician spends a lot of time building a parsimonious, what I'll call, base model with the necessary variables and no interaction effects added.
  The statistician will spend much time on this base model, many resources and care, thinking about whether or not it's interpretable and whether it makes good contextual sense.
  The problem with these base models that don't include complexity is that they assume that the effects they're estimating are consistent across all possible subgroups like sex, race, and age, and that just typically isn't true.
  This lack of complexity
  really prevents people from being able to use it, as compared to say a machine learning model, which does a really good job of modeling this complexity. So the problem can really be summarized in the following way.
  Is there a way that we can not be all the way at traditional multivariable regression models and not all the way to machine learning models,
  but find a nice sweet spot in the middle, where we're able to take the nice interpretability of our regression models and add in an interaction or two
  that we found in this big data, so that we can add interesting nuance to our models and complexity, that, again, add predictive performance as well as good contextual sense.
  So if there is a way of doing that, what we'd really like to do is to develop a tool for statisticians, data scientists, and investigators to be able to explore the interactions after they build a base model.
  the base model happens first, and then the complexity exploration usually would happen second.
  So we have some constraints and preferences if we were to develop a tool like this. We would like there to be, based on traditional statistical models linear logistic regression,
  we'd like them to remain interpretable. We want to check fewer models, if possible, and we'd prefer feasible over optimal. I'll get a little bit more into what that means later, but in essence, really what it means is that
  we would like there to be a plethora of solutions rather than just one single one that are good and what we'll call feasible.
  And we'd also like it to be flexible. It could be adapted to be able to work with linear regression, logistic regression, Cox proportional regression. We could use it for Poisson regression, any type of regression, this framework or tool could be able to be used for.
  And again it's going to be hybrid between traditional statistical methods and machine learning. The results that are going to come out aren't necessarily going to be inferential, but they are going to be exploratory and they'll motivate and influence what we spend our future time doing.
  So let's now talk about this potential solution, the feasible solution algorithm.
  The algorithm...the feasible solution algorithm sometimes called FSA, or that's what I like to call it,
  was first discussed in this paper...really first discussed in detail in this paper that I wrote in 2018.
  And
  I'm going to summarize what the algorithm does here.
  So the goal of the algorithm is to identify interactions of order m (so that would be if m was 2, that would be a two-way interaction; if m was 3, that would be a three-way interaction),
  with a feasible criterion, that is, a criterion value that is not necessarily the best one, but one that may be considered semi-optimal in some way.
  So to do this, we would follow the following steps. So the first thing we're going to do is start off with a random interaction of order m, so for our case, let's assume that to be a two-way interaction, so m is 2.
  We're going to consider all exchanges of one of the variables and the interaction for all the other variables. So for instance, let's say we randomly start it.
  We have five variables, so this is just a small example, just to kind of motivate the steps here, but let's say we have five variables and we randomly start at X3, X5 and our criterion, which is R squared for that random starting place, is .5.
  And let's say the way that the algorithm works is it would consider exchanging one of the variables, X3 or X5, for any of the others. So our choices then, are
  X3 X1, X3 X2, X3 X4, X5 X1, X5 X2, and X5 X4. So notice that all of the possible, what I'll call, swaps have at least one of the starting-place variables in them, okay?
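That swap set can be enumerated mechanically; here is an illustrative sketch (Python, with the same variable names as the example, not the add-in's actual code):

```python
variables = ["X1", "X2", "X3", "X4", "X5"]
start = ("X3", "X5")   # the random starting interaction

# Every candidate swap keeps one starting variable and exchanges the other
swaps = [tuple(sorted((keep, new)))
         for keep in start
         for new in variables if new not in start]
print(swaps)
# 6 candidates: (X1,X3), (X2,X3), (X3,X4), (X1,X5), (X2,X5), (X4,X5)
```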
  And then, what we're going to do is of all of these, we'll fit those models and we'll figure out what is the criteria for those models. So what we can see here is that
  for the X3 X5 model, we have .5; X3 X1, the criterion is .4.
  And with R squared, we obviously don't want to go to a worse place, we want to go to a better place, a higher R squared, so we would find out of all the possible choices, what's the best place to go to.
  In this example, the best place to go to is swap number three, which is X3 X4. So what we would do is we'd move on to step three and we would make the best exchange from step number two.
  In that case, we would move to place X3, X4 and then we would return to number two and repeat until no improvements can be made.
  So we would repeat this process, moving to places, starting there, considering all the swaps, until eventually we can't make an improvement.
  And we're going to call that in...that place that we end up a feasible solution.
  And we're going to repeat steps one through four to find other feasible solutions. We can do this over and over again, and this process, this feasible solution algorithm, isn't guaranteed to give you the optimal solution, although it can give you the optimal solution, some of the times.
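The four steps above can be sketched in code. The following is an illustrative Python reimplementation with a pluggable, made-up scoring function standing in for R squared; it is not the actual rFSA or JSL source:

```python
import random

def fsa(variables, criterion, m=2, n_starts=10, seed=0):
    """Feasible Solution Algorithm (sketch).

    Hill-climbs over interactions of order m by swapping one variable
    at a time; `criterion(combo)` returns a score to MAXIMIZE
    (e.g. the R squared of the model containing that interaction).
    Returns the set of feasible solutions found.
    """
    rng = random.Random(seed)
    solutions = set()
    for _ in range(n_starts):
        # Step 1: random starting interaction of order m
        current = tuple(sorted(rng.sample(variables, m)))
        while True:
            # Step 2: consider exchanging each member for every other variable
            best, best_score = current, criterion(current)
            for i in range(m):
                for v in variables:
                    if v in current:
                        continue
                    cand = tuple(sorted(current[:i] + (v,) + current[i + 1:]))
                    if criterion(cand) > best_score:
                        best, best_score = cand, criterion(cand)
            if best == current:      # Step 4: no improvement, so this is a feasible solution
                break
            current = best           # Step 3: make the best exchange
        solutions.add(current)
    return solutions

# Toy criterion: reward combos containing X3 and X4 (a stand-in for R squared)
found = fsa(["X1", "X2", "X3", "X4", "X5"],
            lambda combo: sum(v in ("X3", "X4") for v in combo),
            m=2, n_starts=5)
print(found)   # {('X3', 'X4')}
```

With this toy criterion every random start climbs to the same place; on real data, as the talk notes, different starts can end at different feasible solutions.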
  So let's talk about some of the byproducts of using this algorithm, the feasible solution algorithm. And these are outlined in a paper that
  Elliott, a colleague of mine wrote in 2021, where she describes that feasible solutions are not guaranteed, as I just said earlier, to be optimal for a chosen criterion.
  And that is to say that all optimal solutions are feasible, but not all feasible solutions are optimal. So feasible solutions are a type of semi-optimal solution; they give you a good, feasible criterion value, but not necessarily an optimal one.
  These feasible solutions' criterion values are typically very close to the optimal ones, though, so they tend to be pretty good. They just might not be as good as the best one.
  If you repeat the process, these four steps, you will get potentially many feasible solutions. So if you do this algorithm 10 times, you might get four feasible solutions.
  You might get 10 feasible solutions. It depends on the data that you're using, as well as the variance and covariance of that data set, so it's a little bit
  undetermined walking in as to how many solutions you're going to get.
  And that's why we usually encourage users to repeat the feasible solution algorithm many times, because that's going to increase your chances of getting the optimal ones, as well as the feasible ones, and make sure you've
  adequately searched the space. And we have another paper out that describes that (if anybody's interested, you're welcome to reach out to me) as to how many random sorts should I do to make sure I have a reasonable probability of getting the optimal one. We have a
  theoretical paper about that as well.
  But, in essence, the way that this works is that some of these interactions are more attractive. The beta space attracts them more than others, and so what you end up seeing some of the time is that, even though
  an interaction is not the optimal one, the data can often lead the feasible solution algorithm
  to that place more often than to the optimal one, and that just has to do with how the data are correlated and whatnot.
  So let's talk about the R package and then how I moved that to the JMP add-in.
  So
  the R package is called rFSA, and I know that we might have some people that have used R quite extensively and some that haven't.
  R is a really nice programming language and it's often taught in statistics programs, and so it was the first place I immediately went to when I was working on my dissertation
  to, you know, write up this algorithm and to really provide a tool to the community for identifying and exploring interactions in their own data sets.
  And so the rFSA package implements this feasible solution algorithm I just talked about
  for interactions in large data sets, and this package supports the optimization of many different criteria like R squared, adjusted R squared, AIC,
  interaction P value, and so forth. And it also supports different modeling strategies, like linear models and generalized linear models, and can be easily adapted to work for other types of modeling, things like Cox proportional hazard models.
  And it also gives multiple solutions as it repeats the algorithm as many times as you specify.
  So why...I want to talk a little bit about my motivation about moving this package to JMP, and I'm going to talk about the why, the when, and the how I was going to do this.
  So the first question is, why do this? The first thing is it's fun to do. I like writing in different programming languages. I had not had a lot of experience writing in the JMP scripting language, which is JMP's version of
  a statistical programming language, and so I wanted to learn it, and I thought it would be fun to take this R package that I spent the greater part of four years on and try to move it over to JMP, because
  I really like JMP. I think JMP is great. It's a really great tool for me and my data
  analysis pipeline. I usually start off all my projects in JMP and explore the data, I plot the data, I look at things and then that gives me a lot of good intuition about those data. And I found that a lot of my
  colleagues, specifically one of my advisors, he primarily works in JMP and he was always asking me for the FSA package in JMP.
  And so I thought, you know, why not give it to him? He surely deserves it, so I decided that would be a good tool and that hopefully, other people would
  get some use out of it. And that leads me into my other point about why to do this, and I've gotten a lot of great feedback about rFSA.
  Specifically, I've been tracking how many people have downloaded my R package and there's over 16,000 people who've downloaded it since we put it out there in 2019.
  I've gotten countless emails from people all around the world, and I want the same thing to be accessible for people in JMP, and I hope that through this add-in, people are able to find really cool interactions that change how they interpret data and how they understand it.
  And then you might ask, well, when was I going to do this? This isn't exactly something that somebody's paying me to do. You know, JMP's not paying me to do it.
  In my current position, while I think that they would find it to be interesting, they're probably not exactly gung ho on me spending a bunch of extra time doing this.
  But luckily, one thing I do have built into my position is some free time on Fridays. I try to leave Friday afternoons or Friday mornings open to just fun,
  fun things that have to do with my job. And I call it Fun Friday Free Time, and that's what I decided to do for the last few months was take my Fun Friday Free Time and spend it building a JMP add-in.
  You might ask, well, how was I going to do this? You know, I didn't know JMP scripting language. How was I going to go about doing this? So
  the first thing is, you know, I was going to learn it. So there's a lot of really great resources out there about how to learn JSL.
  So there's JMP JSL code support within the actual JMP software itself. Just go to Help and you can go right into
  the scripting index that's there. There's countless things that are online
  on how to understand JMP scripting language and where to get started. And then there's the community.jmp.com,
  which is really great for getting started, where you can ask questions or view other questions that people have asked and borrow the code that they have posted up there publicly for being able...for people to be able to enjoy. And I did that a number of times here, and it really was great.
  So I'm going to kind of go through each one of these a little bit more in depth, just so if you're interested in moving an R package over or writing your own algorithm or writing your own JSL code,
  you'll have an idea as to where to get started. And then, when I'm done with this I'm going to go into my add-in and specifically what it does and how to use it.
  So how do you learn JSL? Well, you can again...
  you can learn it through a number of JMP's JSL resources that they have. So they have a scripting guide that's 864 pages, which is linked here,
  that you can Google, or you might be able to get these slides after this is over, and be able to get these links. There's a scripting index within JMP, which you can just go to help and then scripting indexes. I put here for you guys, really easy to use and get started.
  You can contact JMP. So the first thing I did is I had a contact at JMP. Her name is Ruth Hummel and I said, hey, I want to do this, where do I start?
  She gave me support and encouragement about the idea. She thought it was great and then she connected me with a JSL code expert,
  whose name is Mark Bailey. And Mark was tremendous through this whole process. Mark helped me with just general support around the scripting language, reviewing my code, helping me write parts of it,
  get started on part of it. And we took...we went back and forth about 22 different times through email over the last few months. And
  the resources that JMP provided me, as far as direct employees who were willing to help me with my project
  were tremendous. I mean, I couldn't have asked for anything better. I mean, I've never received this type of support when I was trying to create something for any other platform before. So kudos to JMP for
  providing this, and they're the main reason why this exists; it's because they provided fantastic resources.
  So community.jmp.com, you can ask questions there, you can get answers, you can borrow code, you can get certified there in
  JMP scripting language. You can search the Community for anything you want, you can...it's just like a Google, but it's just for JMP.
  And you can search anything you want, so you can type in JSL and JSL whatever you want to do, and it's probably somebody out there that's already posted about that. If there's not, you can add that to the discussion board.
  And then there's borrowing code, and this is one of the things Mark passed on to me was, hey, borrow code. There's a lot of code out there on the Community website,
  and people have shared it for a reason. So I...on the right here is actually something that I borrowed for my add-in that I developed. I wanted users to know where they were
  in terms of running the algorithm and how much longer it was going to take or how much progress had already happened.
  And so I didn't want to write my own progress bar in JMP, so I just borrowed one that was on the Community board and added it straight into my add-in, so yeah.
  This person, Craige Hales, who is a retired staff of JMP, wrote this code and I borrowed it. So thanks, Craige; Thanks, Mark for recommending borrowing the code. It saved me a lot of time, so I really appreciate it and it made the add-in a lot better.
  So now let's talk about the add-in finally. So I've talked to you about why this add-in is needed, right, how others have gotten use out of the R package and why I think JMP users will benefit from it, and I've talked to you about how I did it.
  And now I want to actually show you how it works, which is hopefully the most fun part of this whole thing. So I moved this whole package, R package, over to JMP in a few months on my Fun Friday Free Time that I have, so that's just a few hours on Friday. And the JMP add-in
  currently only works for linear and logistic regression models. I hope to be able to expand it to other models later, but right now, works for linear and logistic regression models. The other thing is that the add-in,
  it doesn't have a lot of the fancy bells and whistles all the other built-in JMP modules have, you know. For instance, when you put a categorical variable into the response variable, the personality type doesn't automatically switch to logistic regression.
  I haven't gotten around to that. There's a lot of other things I haven't gotten around to that I hope to improve with this package as people use it and give me feedback on it.
  So it does lack some functionality. But the cool thing is that there's an add-in manager in JMP
  that allows you to create your own add-in from your JSL code, so it's really just a simple couple of button clicks; you can take your JSL code and turn it into an add-in that you can share with the whole JMP community.
  And I've posted this on the community.jmp.com website for everybody to be able to go out there and access that add-in that I'm about ready to show you
  and to access the data set that I'm going to use as well. But this will work with any of your data sets that you have, not just with my example.
  So I'm going to give you a live tutorial really quick of this add-in, called...the add-in, I just called it exploring interactions, and
  this is all going to be done though via the feasible solution algorithm that I talked about earlier. The example is a linear model
  where I'm going to fix two variables, so that would be my base model. My base model will have a continuous response variable and two covariates that I want to be able to adjust for.
  And then, what I would like to do is I'd like to consider second order interactions to add to that model between any of the 10 variables I have in my data set.
  And I'd like to do five random starts, and my criterion, which currently is the only criterion that's built into the
  JMP add-in, is to minimize the interaction's P value. So each of these interactions that we check
  produce a P value and what I want to do is I want my solutions to be the ones that have
  very small P values. So it's going to search the space, based on what is the interaction's P value and go to the ones that have the best one. So our results are going to have a lot of interactions that have small P values. And
  usually what I'll recommend after you do this type of procedure, the feasible solution algorithm,
  is you follow it up with plotting the data, looking at those interactions in your model, and thinking critically about them in contextual sense, because at the end of the day, we're exploring the data here.
  This is not inferential in any way. We're just using the signal of the data to direct as to what interactions may exist in this data.
  And again, the data and the add-in are posted on the Community website.
  Alright, so I'm gonna stop, get out of this really quick, and I'm going to pull up the
  JMP data set that I have. Hopefully you guys can see this.
  And as you can see, just a really quick overview, this is actually all a bunch of random data that I generated. There's no real structure to this at all. This isn't real data, just randomly generated data in JMP.
  It has 20 observations here. I've got 10 continuous variables, explanatory variables.
  I have a continuous response variable, and I have also categorized the response variable as either being greater than zero or less than zero.
  So if you want to be able to look at...look at the logistic regression results you could do a logistic regression example with the data as well.
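If you'd rather script your own toy table, something like the following Python sketch generates comparable random data. The column names here are my own invention for illustration, not the ones in the posted data set:

```python
import random

random.seed(2021)
n, p = 20, 10   # matches the demo table: 20 rows, 10 explanatory variables

# 10 continuous explanatory variables plus a continuous response
table = {f"X{j}": [random.gauss(0, 1) for _ in range(n)] for j in range(1, p + 1)}
table["Y"] = [random.gauss(0, 1) for _ in range(n)]

# Categorized response (above or below zero) for the logistic regression example
table["Y cat"] = ["> 0" if y > 0 else "< 0" for y in table["Y"]]
```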
  So, once you get the add-in, go to the website, you download the add-in, you just double click on it and it installs directly to JMP. It's really easy.
  And then you can just go here, so you go to add-in, go...once you have your data set open, you go to add-in to explore interactions.
  And this will pop up here. So we have a couple of things. So we'll see all of our variables over here on the left, just like you would with any other module in JMP.
  And so I'm going to pick my response variable, which is going to be Y, and then I'm going to fix, so this is where I'm constructing my base model. So these are all the linear main effects that I'm adding to my model. So in this one, I've got two variables,
  X1 and X2, that I'm going to add here. Now one of the things you need to do is specify the modeling type, so there's two types in my
  add-in you can choose from. One is the standard least squares modeling type and one is the logistic regression one.
  And this isn't going to automatically choose, like I said earlier, based on what you've put in here, so you can totally put in a categorical response variable and it's not going to switch. You have to switch it yourself. That's one of those fancy bells and whistles, hopefully, I'll get to later.
  Get rid of that. Put this back in here.
  Alright, so this is our setup, and so this will be fitting the model y equals beta zero plus beta one X one plus beta two X two. And now down here is where I'm going to put in the variables I want to
  look for interactions between, so I'm going to select all of X1 through X10. I'm going to add those over.
  And then I tell it, okay, how many times do I want to run the algorithm? So here I'm going to do five times, for the sake of time,
  and the order of the interactions to consider, so I'm only going to do two.
  I recommend staying at three or below; it usually gets a lot harder to interpret interactions that are four-way or five-way interactions, but
  the algorithm...or the way that I've set this up is that you could go that high if you wanted to. I haven't put a limit actually, but just know as the higher you go here, the more time it's going to take for you to run this.
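To make that runtime warning concrete: each FSA step only refits the swap candidates, of which there are m × (p − m) for p variables and order m, while an exhaustive search would fit every combination. A rough count (Python, purely illustrative):

```python
import math

p = 200   # number of candidate variables
for m in (2, 3, 4):
    per_step = m * (p - m)        # models fit per FSA step (all one-variable swaps)
    exhaustive = math.comb(p, m)  # models an exhaustive search would fit
    print(m, per_step, exhaustive)
```

The per-step cost grows roughly linearly in m, but higher orders take more steps and more random starts to search the much larger space, which is why higher orders run longer.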
  So I'm going to hit Okay. This is going to take a second.
  So you can see the code that I borrowed.
  You can see it came up here and it tells me I'm 20% of the way done, 40%.
  So these are solutions. So every time that check's going forward, a feasible solution is being found, and then what pops out at the end is a data table, so
  you know, again, this isn't one of those features...I'd like there to be a selection box, where you could select the interaction and then hit
  create model, and it would do all that for you, just like JMP does already for a lot of different things, but I haven't gotten there yet. This just tells you what the solutions were.
  So it gives you kind of a summary of what it did. So it did five random starts. It tells you what was the response variable, the fixed variables,
  the interaction variables that it found, okay, and the R squared, the adjusted R squared for that model, the root mean squared error and then the interaction P value. Okay, so what we can see here is that
  (if I click on this it'll show me) there's only two solutions, so there's...X2 X3 was one interaction and X10 X7 was another.
  And so now, what I can do, I'm not going to do that now, is I can go and build this model and reproduce these results and look into them to see, okay, is this a good statistical
  interaction that I found here? You know, what do my leverage plots look like? What do all my different criterion
  plots look like? And what do my diagnostic plots look like? Those types of things. And then I can begin asking questions, like does this interaction make sense in terms of the parameter estimates and whatnot?
  It's exploratory, so it's going to give you potential solutions, but you have to follow that up with the due diligence and actually think about the results you've gotten
  and ask questions of those who are contextual experts in the field. That's usually how we use this in the health sciences. We use this with health data, we produce interactions, and we then go to the physicians to ask, does it make sense that this effect,
  you know, would not be consistent across age or sex or whatever. And they'll usually talk to us about that, you know, think about it. The whole idea is that this will hopefully influence future studies, where we can power the studies to look for these interaction effects in a more
  well-powered way.
  Alright, so I'm gonna go back now to my PowerPoint. So that was this tutorial. You can see it's pretty easy to use. That was a really small example. You can ramp this up,
  just know, you know, the bigger data sets you have, the longer that whole process that I just did is going to take. It's not going to be 10 seconds long; it's going to be,
  you know, potentially, you know, 5-10 minutes long, depending on how big your data set is. I've used the add-in on data sets with 100
  explanatory variables, and it took about that amount of time, about seven minutes, to run five or 10 random starts.
  Things are faster in the R package than they are in the JMP add-in, but that's because, you know, again, this is my first stab at this and, hopefully, through iterations that will get even faster and better.
  So let's talk about future improvement. So I want to allow users to be able to optimize or to
  find feasible solutions based on any, you know, any criteria they want. So if they want to find feasible solutions based on optimizing R squared or optimizing AIC or optimizing the misclassification rate,
  you know, in logistic regression models, I want to allow them to be able to do that. And it's really not that hard of a thing to do, I just need to go in and do it.
  Automatically save or select modeling personalities based on variable selections. That's just simple. You put in a
  binary logistic variable and, you know, it's going to automatically select logistic regression for you. I don't think that's probably super hard; I could probably do that pretty quickly. I feel confident doing that having gone through this process of moving my R package over.
  I want to improve the speed some. There's a lot of things I could do to improve the speed of this.
  So, I'd like to be able to do that. A recall button would be nice. I don't have that now. Being able to click that recall button is one of my favorite features in JMP.
  And then I'd like to be able to streamline going from the results that you got from FSA to building a model. So
  you can do that already, like when you do forward selection in JMP or Lasso or any of those things, you can just click make model and it brings all the variables over super easily.
  I'd like to be able to do that with this feasible solution algorithm, just get those types of results and just click the make model and it automatically does everything for you
  so that you can explore them, and, again, hopefully, you make good sense of those interactions that you found.
  So some acknowledgments that I'd like to go through. So first is Mark Bailey. I couldn't have done it without you. Thank you so much for your support with the JSL code. Thank you to Ruth
  for just getting me in contact with Mark, as well as just being a general, you know,
  support person for this project and just providing good feedback. Anne and Larry, I just appreciate
  your help with all things that had to do with JMP and with our past relationships and for encouraging me to do this Discovery Summit. It's been really great and I'm excited to meet more people in the JMP community
  through it, so thank you for that. You can get in touch with me in a number of ways. You can send me an email at my university email, you can contact me on Twitter if that's the way you like to do things.
  And I also am planning on posting this JMP add-in up on github, as well as the Community website, so please feel free to check either of those places for future updates of the add-in.
  So yeah, thank you for having me and I'm excited to take your questions during our allotted time, thank you.