
JMP BEAST Mode: Boundary Exploration through Adaptive Sampling Techniques (2020-US-30MP-562)

Level: Intermediate

 

James Wisnowski, Principal Consultant, Adsurgo
Andrew Karl, Senior Statistical Consultant, Adsurgo

Darryl Ahner, Director, OSD Scientific Test and Analysis Techniques Center of Excellence

 

Testing complex autonomous systems such as auto-navigation capabilities on cars typically involves a simulation-based test approach with a large number of factors and responses. Test designs for these models are often space-filling and require near real-time augmentation with new runs. The challenging responses may have rapid differences in performance with very minor input factor changes. These performance boundaries provide the most critical system information. This creates the need for a design generation process that can identify these boundaries and target them with additional runs. JMP has many options to augment DOEs conducted by sequential assembly where testers must balance experiment objectives, statistical principles, and resources in order to populate these additional runs. We propose a new augmentation method that disproportionately adds samples at the large gradient performance boundaries using a combination of platforms to include Predictor Screening, K Nearest Neighbors, Cluster Analysis, Neural Networks, Bootstrap Forests, and Fast Flexible Filling designs. We will demonstrate the Boundary Explorer add-in tool with an autonomous system use-case involving both continuous and categorical responses. We provide an improved “gap filling” design that builds on the concept behind the Augment “space filling” option to fill empty spaces in an existing design.

 

 

Auto-generated transcript...

 


James Wisnowski Welcome, Team Discovery. Andrew Karl, Darryl Ahner, and I are excited to present two new adaptive sampling techniques that
should prove very useful to practitioners for augmenting designed experiments. What I want to do is go through a couple of our
processes here and talk about how this all came about. When we think about DOE and augmenting designs, there is already a robust capability in JMP.
What we have found, though, working with some very large-scale simulation studies, is that we're missing a piece here:
gap filling designs and adaptive sampling designs. The
key point is that the adaptive sampling designs are going to focus on the response.
This is quite different from a standard design, where you augment by looking only at the design space, the X matrix. Now we're actually going to take into account the targets, the responses. This ends up
providing a whole new capability, so that we can test additional samples where the excitement is. We want to be in a high-gradient region, much like steepest ascent in response surface methodology.
Now we're going to automate that, so we can do this with many variables and thousands of runs in the simulation.
The good news is that this does scale down quite nicely for the practitioner with the small designs as well.
And I'll do a quick run-through of our add-in that we're going to show you, and then Andrew will talk a little bit about the technical details of this.
One thing I do want to apologize for: this is going to be fairly PowerPoint-centric rather than a JMP demo,
for two reasons. Primarily because of
time, since we've got a lot of material to get through, but also because our JMP utilization is really in the algorithm behind this adaptive sampling. So
ultimately, the point-and-click user interface we've developed is very simple, but what's behind the scenes
in the algorithm is really the power of JMP here. So real quick, the gap filling design is pretty clear. We can see there are some gaps here; maybe this is a bit of an exaggeration, but it's demonstrative of the technique,
though in reality we may have a very large number of factors, where the curse of dimensionality comes into play and you have these holes in your design.
And you can see we could augment it with a space filling design, which is kind of the workhorse
of augmentation for a lot of our work, particularly in simulation, and it doesn't look too bad. If we look at those blue points, which are the new
points, the points that we've added, it doesn't look too bad. And then if you start looking maybe a little closer, you can kind of see though, we started replicating a lot of the ones
that we've already done and maybe we didn't fill in those holes as much as we thought, particularly when we take off the blue coloring and we can see that
there's still a fair number of gaps in there. So as we were developing adaptive sampling, we recognized that one piece of it was that we needed to fill in some holes
in a lot of these designs. We came up with an algorithm in our tool, our add-in, called Boundary Explorer, that will do just this...
for any design, it will perform this particular function to fill in the holes, and you can see where that might have a lot of utility in many different applications.
So in this particular graph, we can see that those blue points are now more concentrated in the holes, with some dispersed throughout the rest of the region. And even when we take off the...
the color, it looks a lot more uniform across; we have filled that space very well.
Now that was more of a utility that we needed for our overall goal here, which was adaptive sampling. And the primary application that we found for this
was autonomous systems, which have gotten a lot of buzz and a lot of production and test, particularly in the Department of Defense.
In autonomous systems, there are really two major items. You really need
some sensor to let the system know where it is, and then the algorithm or software to react to that. So it's the sensor-
algorithm integration that we're primarily focused on. What that then drives is a very complex simulation model that honestly needs to be
run many, many thousands of times. But more importantly, what we have found is that in these autonomous systems, there are these boundaries in performance.
So for example, we have a leader-follower example from the Army. That's where a soldier would drive a very heavily armored
truck in a convoy and then the rest of the convoy would be autonomous, they would not have soldiers in them.
Or think of maybe the latest Tesla pickup truck, where you have auto-nav, right? So the idea is we are testing these systems, and we end up doing a lot of testing. And what happens is,
for example, in this Tesla, at 30 miles an hour you may be fine avoiding an obstacle, but at 30.1 you would have to do an evasive maneuver that's outside the algorithm's specifications. So
that's what we talk about when we say these boundaries are out there. They're very steep
changes in the response, very high gradient regions. And that's where we want to focus our attention.
We're not as interested in where it's kind of a flat surface; it's really the interesting regions where we would like to put our runs.
And honestly, what we found is, the more we iterate over this, the better our solution becomes. We completely
recommend doing this as an iterative process. Hence the adaptive piece of this: do your testing, generate some new good points, see what those responses are, and then adapt your next set of runs to them.
So that's our adaptive sampling.
The genesis of this idea really came from some work that we did with the Applied Physics Laboratory at Johns Hopkins University. They are doing some really nice work with the military, and
while reviewing one of their journal articles, I was thinking to myself, you know, this is fantastic, and we could even use JMP to come up with a solution that would be more accessible to many practitioners. Because the problem with
the Johns Hopkins approach is that it's very specific and somewhat difficult to integrate; it's not something that's very
accessible to smaller test teams. So we want to put this in the hands of folks who can use it right away.
So this paper from the Journal of Systems and Software is kind of the source of our Boundary Explorer. As it turns out, we used a lot of the good ideas, but we were able to come up with different approaches
and other methods, in particular using native capability in JMP Pro as well as some development of our own along the way, like the gap filling design.
Now,
in terms of this example problem, it's probably best if I just go ahead and explain it
right in a demo here. So if I look at a graph here, I can see that...I'll use this...I'll just go back to the Tesla example. So let's say I'm doing an auto navigation type activity and I have two input factors and let's say maybe
we have speed and density of traffic. So we're thinking about this Tesla algorithm: it wants to merge to the lane to the left because it wants to pass, so it has to merge. One factor would be the speed the Tesla is going, and the other might be the density of traffic.
And then maybe down in this area here we have a lower number. So we can think of these numbers, two to 10, as the response, maybe even like a probability of a collision.
So down at low speed/low density, we have a very low probability of collision, but up here at high speed/
high density, you have a very high probability. But the point is, in what I have highlighted and selected here, you can see that there are very steep
differences in performance along the boundary region. So as we do the simulation and start doing more and more software testing for the algorithm,
we'll note that it really doesn't do us a lot of good to get more points down here. We know that we do well in low density and low speed. What we want to do is really work on the area in the boundaries here. So that's our problem,
how can I generate 200 new points that are really going to follow my boundary region here?
Now, here what I've done is, I have X1 and X2; again, think of the speed and...
our speed as well as the density. Then I just threw in a predictor variable here that doesn't mean anything, and then there's our response. So to do this, all I have to do is come into Boundary Explorer and, under adaptive sampling, specify
my two responses (and you can have as many responses as you need) and then here are my three input factors. And then I have a few settings here, such as whether or not I want to target the global minimum and max, and show the boundary.
And we also ultimately are going to show you that you have some control here. What happens in this algorithm is we're really looking at what the nearest neighbors are doing. If all the nearest neighbors have the same response, as in zero probability of having
an accident, that's not a very interesting place. I want to see where there are big differences. And that's where that nearest neighbor
comes into play. So I'll go ahead and run this. And what we're seeing is that the algorithm used JMP's native capability for the predictor screening and
for fitting the normal mixture distribution. You can see it's running the bootstrap forest; Andrew is going to talk about where that is used.
And ultimately
what we're going to do here is generate a whole set of new points that should hopefully fall along the boundary. So that took, you know, 30 seconds or so to do these points, and from here I can just go ahead and pull up
my new points. So you can see my new points are sort of along those boundaries, probably easiest seen if I go ahead and put in the other
ones.
So right here, maybe I'll
switch the color here real quick.
And I'll go ahead and show maybe the midpoint and the perturbation points.
So right now we can kind of see where all the new points are. The ones that are kind of shaded are the ones that were original, and now we're seeing all of my new points that have been generated along that boundary.
So of course the question is, how did we do that?
So what I'll do is I'll head back to my presentation.
And from there, I'll turn it over to Andrew, where he'll give a little bit more technical detail in terms of how we go about finding these boundary points, because it's not as simple as we thought.
Andrew Karl Okay. Thanks, Jim. I'm going to start out by talking about the gap filling real quick, because in addition to being integrated into the overall BEAST tool,
it's available as a standalone tool as well. It's got a rather simple interface where we select the columns that define the space we want to fill in.
And for continuous factors, it reads in the coding column property to get the high and low values and it can also take nominal factors as well.
In addition, if you have generated this from custom design or space filling design and you have disallowed combinations, it will read in the disallowed combination script
and only do gap filling within the allowed space. So the user specifies their columns, as well as the number of new runs they want.
And let me show a real quick example in a higher dimensional space. This is a case of three dimensions. We've got a cube where we took out a hollow cylinder and we went through the process of adding these gap filling runs, and we'll turn them on together to see how they fit together.
And then also turn off the color to see what happens.
So this is nice because in the higher dimensional space, we can fill in gaps that we couldn't even necessarily see in the bivariate plots. So how do we do this?
So what we do is, we take the original points, which in this case is colored red now instead of black and we can see where those two gaps were,
and we overlay a candidate set of runs from a space filling design for the entire space.
In the concatenated data table of the old runs and the new candidate runs, we have a continuous indicator column: we label the old points 1 and the candidate points 0. And in this concatenated space, we now fit a 10-nearest-neighbor
model to the indicator column and save the predictions from it. The candidate runs with the smallest predictions, in this case blue,
are the gap points that we want to add into the design. Now, if we do this in a single pass, it tends to overemphasize the largest gaps. So what we actually do is a
tenfold process, where we take a tenth of our new points, select them as we see here, add those in, and then rerun our k-nearest-neighbor algorithm to pick out some new points and fill out all the spaces more uniformly.
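To make that loop concrete, here is a minimal Python sketch of the tenfold gap-filling idea. It is only an illustration, not the add-in's actual code: uniform random candidates stand in for JMP's Fast Flexible Filling candidate design, scikit-learn's k-nearest-neighbor regressor stands in for the K Nearest Neighbors platform, and the function name and arguments are hypothetical.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def gap_fill(existing, n_new, n_candidates=5000, n_folds=10, k=10, seed=None):
    """Pick n_new gap-filling runs for an existing design whose factors are scaled to [0, 1]."""
    rng = np.random.default_rng(seed)
    d = existing.shape[1]
    design = existing.copy()
    selected = np.empty((0, d))
    for _ in range(n_folds):
        n_take = min(int(np.ceil(n_new / n_folds)), n_new - len(selected))
        if n_take <= 0:
            break
        # Candidate set standing in for a fresh space-filling design over the whole space
        candidates = rng.uniform(size=(n_candidates, d))
        # Continuous indicator: existing (and already selected) points = 1, candidates = 0
        X = np.vstack([design, candidates])
        y = np.concatenate([np.ones(len(design)), np.zeros(len(candidates))])
        knn = KNeighborsRegressor(n_neighbors=k).fit(X, y)
        # Candidates with the smallest predictions sit farthest inside the gaps
        gaps = candidates[np.argsort(knn.predict(candidates))[:n_take]]
        selected = np.vstack([selected, gaps])
        design = np.vstack([design, gaps])  # refit against the augmented design on the next pass
    return selected
```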
So that's just one option...the gap filling is one option available within boundary explorer.
So Jim showed that we can use any number of responses, any number of factors and we can have both continuous and nominal responses and continuous and nominal factors.
The continuous factors that go in, we are going to normalize behind the scenes to [0, 1] to put them on a more equal footing.
And for the individual responses that go into this, we are going to loop individually over each response to find the boundaries for each of the responses within the factor space.
And then at the end, we have a multivariate tool using a random forest that considers all of the responses at once.
And so we'll see how each of the different options available here in the GUI, in the user interface, comes up within the algorithm.
So after normalization of any of these continuous columns, the first step is predictor screening for both the continuous and nominal responses.
What this does is find the predictors that are relevant for each particular response.
We have a default setting in the user interface of .05 for the proportion of variance explained, or portion of contribution, from each variable. So in this case, we see that X1 and X2 are retained for response Y1, and the X3 noise factor is rejected.
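A rough stand-in for that screening step, assuming a continuous response: the sketch below uses a scikit-learn random forest's variable importances with the same .05 cutoff in place of JMP's Predictor Screening platform (a classifier would play the same role for a nominal response). The function and column names are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def screen_predictors(X, y, names, threshold=0.05, seed=0):
    """Keep the factors whose forest-based contribution to this response exceeds the threshold."""
    forest = RandomForestRegressor(n_estimators=500, random_state=seed).fit(X, y)
    return [n for n, imp in zip(names, forest.feature_importances_) if imp >= threshold]

# e.g. screen_predictors(X, y1, ["X1", "X2", "X3 noise"]) would be expected to drop "X3 noise"
```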
The next step is to run a nearest neighbor algorithm. We use a default of 5, but that's an option the user can toggle.
We aren't so concerned with how well this predicts; we simply use it as a method
to get the five nearest neighbors. Which rows are the five nearest neighbors, and what are their distances from the current row?
And we're going to use this nearest neighbor information to identify, for each point, the probability of its being a boundary point.
We have to split here and use a different method for continuous versus nominal responses. For the nominal responses, we concatenate the response from the current row
along with the responses from the five nearest neighbors, in order, into a concatenated neighbors column.
And we have a simple heuristic to identify the boundary probability based on that concat neighbors column.
If all the responses are the same, we say it's a low probability of being a boundary point.
If one of the responses is different, then we say it's got a medium probability of being a boundary point.
And if two or more of the responses are different, it's got a high probability of being a boundary point. We also record the row used; in this case, that is the boundary pair, the closest neighbor that has a response different from the current row.
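A minimal sketch of that heuristic, assuming the factors are already normalized and the nominal response is a NumPy array; the function name, the -1 code for non-boundary rows, and the use of scikit-learn's NearestNeighbors are all assumptions, not the add-in's implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def nominal_boundary(X, y, k=5):
    """Low/medium/high boundary probability and boundary-pair row for a nominal response y."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # k + 1 because each row is its own nearest neighbor
    dist, idx = nn.kneighbors(X)
    idx = idx[:, 1:]                                   # drop the self-neighbor
    labels, pairs = [], []
    for i in range(len(X)):
        differs = y[idx[i]] != y[i]
        n_diff = int(differs.sum())
        labels.append("low" if n_diff == 0 else "medium" if n_diff == 1 else "high")
        # Boundary pair: the closest neighbor with a different response (neighbors are sorted by distance)
        pairs.append(int(idx[i][differs][0]) if n_diff else -1)
    return labels, np.array(pairs)
```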
We can plot those boundary probabilities in our original space filling design.
So as Jim mentioned early on, we initially run a space filling design before running this Boundary Explorer tool, to explore the space and get some responses.
And now we've calculated the boundary probability for these points, and we can see that our boundary probabilities are matching up with the actual boundaries.
For continuous responses we take the continuous response from the five nearest neighbors, and add a column for each of those, and we take the standard deviation of those.
The ones with the largest standard deviations of neighbors are the points that lie in the steepest gradient areas and those are more likely to be our boundary points.
We also multiply the standard deviation by the mean distance in order to get our information metric,
because what that does is, for two points that have an equal standard deviation of neighbors, it will upweight the one that is in a sparser region, with fewer points already there.
So now we've got this continuous information metric and we have to figure out how to split that up into high, medium, and low probabilities for each point.
So what we do is fit a distribution. We fit a Normal 3 Mixture, and we use the mean of the largest distribution as the cutoff for the high probability points.
And we use the intersection of the densities of the largest and the second largest normal distributions as the cutoff for the medium probability points.
So once we've identified those cutoffs, we apply them to form our boundary probability column.
And we also retain the row used. In this case, for the continuous responses, that is the neighbor whose response is the most different in absolute value from the current row.
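An illustrative sketch of the continuous-response case, with scikit-learn's GaussianMixture standing in for JMP's Normal 3 Mixture fit and the density crossing located numerically on a grid; the names and defaults are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import NearestNeighbors

def continuous_boundary(X, y, k=5, seed=0):
    """Information metric, low/medium/high labels, and boundary pairs for a continuous response y."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, idx = nn.kneighbors(X)
    dist, idx = dist[:, 1:], idx[:, 1:]                       # drop the self-neighbor
    neigh_y = y[idx]
    # Std. dev. of neighbor responses x mean neighbor distance = information metric
    info = neigh_y.std(axis=1, ddof=1) * dist.mean(axis=1)
    # Boundary pair: neighbor whose response differs most in absolute value from the current row
    pairs = idx[np.arange(len(X)), np.abs(neigh_y - y[:, None]).argmax(axis=1)]
    # Split the metric into high/medium/low with a three-component normal mixture
    gm = GaussianMixture(n_components=3, random_state=seed).fit(info.reshape(-1, 1))
    means, sds, wts = gm.means_.ravel(), np.sqrt(gm.covariances_.ravel()), gm.weights_
    top2 = np.argsort(means)[-2:]                             # second-largest-mean, largest-mean component
    hi_cut = means[top2[1]]                                   # mean of the largest-mean component
    # Medium cutoff: where the two weighted densities cross (found numerically on a grid)
    grid = np.linspace(means[top2[0]], means[top2[1]], 1000)
    d0 = wts[top2[0]] * norm.pdf(grid, means[top2[0]], sds[top2[0]])
    d1 = wts[top2[1]] * norm.pdf(grid, means[top2[1]], sds[top2[1]])
    med_cut = grid[np.argmin(np.abs(d0 - d1))]
    labels = np.where(info >= hi_cut, "high", np.where(info >= med_cut, "medium", "low"))
    return info, labels, pairs
```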
So now for both continuous and nominal responses we have the same output. We have the boundary probability and the row used. Now that we've identified the boundary points,
we need to be able to use that to generate new points along the boundary.
So the first and, in some ways, the best method
for targeting and zooming in on the boundary is what we call the midpoint method. What we do for each boundary pair, each row and its row used, its nearest neighbor
identified previously...I'm sorry, not the nearest neighbor, but the neighbor that is most relevant, either in terms of a different nominal response or the most different continuous response.
For the continuous factors, we take the average of the coordinates of those two points to form the midpoint. And that's what you see in the graph here. So we would put a point
at the red circle. For nominal factors, what we do for the boundary pairs is take the levels of that factor
that are present in each of the two points and we randomly pick one of them. The nice thing about that is if they're both the same, then that means the midpoint is also going to be the same level for that nominal factor for those two points.
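A small sketch of the midpoint rule for a single boundary pair, with a hypothetical is_nominal flag per factor marking which coordinates get the random pick rather than the average.

```python
import numpy as np

def midpoint(row_a, row_b, is_nominal, seed=None):
    """Midpoint of a boundary pair: average the continuous factors, coin-flip the nominal levels."""
    rng = np.random.default_rng(seed)
    mid = []
    for a, b, nominal in zip(row_a, row_b, is_nominal):
        if nominal:
            mid.append(a if rng.random() < 0.5 else b)   # if the pair shares a level, the midpoint keeps it
        else:
            mid.append((a + b) / 2.0)                    # average of the two coordinates
    return mid
```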
A second method we call the perturbation method is to simply add a random deviation to each of the identified boundary points.
So for the high probability points, we add two such perturbation points; for the medium, we add one.
For the continuous factors, we add a random normal deviation with mean 0 and standard deviation .075 in the normalized space, and that .075 is something that you can scale
within the user interface to either increase or reduce the amount of spread around the boundary.
And then for nominal factors, we randomly pick a level of each of the nominal factors. Now for the
high probability boundary points that get a second perturbation point, in that second one we restrict the nominal factor settings to all be equal to those of the original point.
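An illustrative sketch of a single perturbation draw, assuming factors already normalized to [0, 1]; the clipping to stay in range and the argument names are added assumptions rather than the add-in's behavior.

```python
import numpy as np

def perturb(point, is_nominal, levels, scale=0.075, hold_nominal=False, seed=None):
    """One perturbation of a boundary point in the normalized [0, 1] factor space."""
    rng = np.random.default_rng(seed)
    new = []
    for x, nominal, lv in zip(point, is_nominal, levels):
        if nominal:
            # hold_nominal=True mimics the second perturbation of a high-probability point,
            # which keeps the original point's nominal settings
            new.append(x if hold_nominal else rng.choice(lv))
        else:
            # random normal deviation; the clip to [0, 1] is an added assumption to stay in range
            new.append(float(np.clip(x + rng.normal(0.0, scale), 0.0, 1.0)))
    return new
```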
So we do this process of identifying the boundary and creating the midpoints and perturbation points for each of the responses specified in Boundary Explorer.
Once we do that, we concatenate everything together, and then we look at all the midpoints identified for all the responses
and use a multivariate technique to generate any additional runs, because the user can specify how many runs they want, and the midpoint and perturbation methods only generate a fixed number of runs, depending on
the lay of the land, I guess you could say, of the data.
So what we do is something similar to the gap filling design, where we take all of the identified perturbation points and midpoints for all of the responses
and fill the entire space with a space filling design of candidate points. We label the candidate points 0 in a continuous indicator column, the midpoints 1, and the perturbation points .01.
We fit a random forest to this indicator and save the predictions for the candidate space filling points,
and then we take the candidate runs with the largest predicted values of this boundary indicator.
Those are the ones that we add in using this random forest method. Now since this is a multivariate method, if you have an area of your design space that is a boundary for multiple responses, that area will receive extra emphasis and extra runs.
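A rough sketch of that multivariate step, again with uniform random candidates standing in for the space filling candidate design and a scikit-learn random forest standing in for JMP's Bootstrap Forest; names and defaults are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def forest_augment(midpoints, perturbations, n_new, n_candidates=5000, seed=0):
    """Pick n_new extra runs where a random forest predicts the boundary indicator is highest."""
    rng = np.random.default_rng(seed)
    d = midpoints.shape[1]
    candidates = rng.uniform(size=(n_candidates, d))      # stand-in for a space-filling candidate set
    X = np.vstack([candidates, midpoints, perturbations])
    y = np.concatenate([np.zeros(len(candidates)),        # candidate points coded 0
                        np.ones(len(midpoints)),          # midpoints coded 1
                        np.full(len(perturbations), 0.01)])  # perturbation points coded .01, as in the talk
    forest = RandomForestRegressor(n_estimators=500, random_state=seed).fit(X, y)
    pred = forest.predict(candidates)
    return candidates[np.argsort(pred)[::-1][:n_new]]      # candidates with the largest predicted values
```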
So here we're showing the three types of points together.
Now, again, to emphasize what Jim said, this needs to happen in multiple iterations, so we would collect this information from our Boundary Explorer tool
and then concatenate it back into the original data set. And then after we have those responses, rerun Boundary Explorer, and it's going to
continuously, over the iterations, zoom in on the boundaries and, in fact, possibly even find more boundaries. The perturbation points are most useful
for early iterations when you're still exploring the space, since they're more spread out, and the random forest method is better for later iterations, because
it will have more midpoints available, since it uses not only the ones from the current iteration but also the previously recorded ones. We have a column in the data table that records the type of point that was added, so we'll use all the previous midpoints as well.
So if we pull up our surface plot for this example we've been looking at, a step function, we can see our new points, midpoints, and perturbation points are all falling along the cliffs, which are the boundaries, which is what we wanted to see.
The last two options in the user interface are to add those gap filling runs, and to target global min/max or match target for any continuous responses, if that's set as a column property.
Just to show one final example here, we have this example where we have these two canyons through a plane with a kind of a deep well at the intersection of these.
And we've run the initial space filling points, which are the points that are shown to get an idea of the response. And if we run two iterations of our boundary explorer tool,
this is where all the new points are placed, and we can see gaps kind of in the middle of these two lines. What are those gaps?
If we take a look at the surface plot, those gaps are the canyon floors, where it's not actually steep down there. So it's flat, even locally over a little region,
but all of these points, all of these mid points,
have been placed not on the flat plane, but on the steep cliffs, which is where we wanted them. And here we're toggling the minimum points on and off, and you can see those are hitting the bottom of the well there. So we were able to target the minimum as well.
So our tool presents two distinct
new options. One is the gap filling, which can be used on any data table that has coding properties set for the continuous factors.
And the other is the Boundary Explorer tool, which adds runs that don't just look at the factor space by itself, but look at the response in order to target the high gradient, high change areas for additional runs.

 

Comments
P_Anderson

Nice presentation Jim and Andrew.  You shared examples in 2D and 3D that were easier to visualize, but I was wondering how many dimensions/variables you have used this tool in exploring?