Our team has been developing discrete-event simulation models of process and packaging systems for several decades. In 2024, we adopted JMP to streamline and strengthen how we interpret statistical results from these models.
Since simulation models can be time-consuming and expensive to run, we now use JMP to build surrogate metamodels based on designed experiments, which dramatically improves the efficiency of extracting insights from simulations. The metamodels yield deeper insight into system behavior than traditional outputs, such as 3D visuals or detailed run statistics from a single model run.
Our first metamodel revealed a clear solution to a long-running analytical challenge: with multiple buffers on a packaging line, how big should each be to optimize line performance given limited capital resources? We present several other metamodels, including one comparing layout options for a line with a merge-divert mechanism, where the mechanism's location had a nonlinear effect on performance.
Finally, we present a case study of a forklift simulation used to support an investment decision on whether to construct a direct connection between two warehouses. Surrogate modeling has become a vital part of our simulation workflow, and JMP has played a key role in making this approach more accessible, efficient, and impactful for both our team and our stakeholders.

Hi, I'm Shannon Browning, Technical Lead for System Analytics at the Haskell Company. We've been JMP users since 2023, but we have a long history with discrete event simulation. Discrete event simulation is a common tool in manufacturing and service systems, allowing us to try new control strategies, layouts, flow logic, and so on in a virtual environment before we actually implement them in the field.
Since adopting JMP, we've had great success using what we call surrogate models or metamodels (models of models) to provide an extra layer of insight. We run a design of experiments around the simulation model itself, systematically varying inputs so we can observe system behavior at this higher level.
There are some different challenges here versus a design of experiments on a physical or destructive process. Computer time is really the only constraint: we're not worried about material constraints or many of the feasibility restrictions that push you toward fewer replications or scenarios in a traditional DOE. That unlocks a lot of opportunity, but it also creates some interesting challenges. Most discrete event simulations are stochastic in nature, meaning random variation is generated inside the model.
We can use a technique called Common Random Numbers, where each configuration is run several times and each replication uses the same random seeds across configurations. The behaviors are similar enough that we can treat the replications as random effects, say, in a mixed effects model. But I think what will be really helpful is diving in and taking a look at some different applications.
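As a minimal sketch of that idea outside of JMP (Python, with a hypothetical `simulate` function standing in for one stochastic simulation run; all names and numbers here are invented for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def simulate(buffer_minutes, seed):
    """Hypothetical stand-in for one stochastic simulation run:
    diminishing-returns benefit from buffer size plus seed-driven noise."""
    rng = np.random.default_rng(seed)
    return 100 * (1 - np.exp(-buffer_minutes / 2)) + rng.normal(0, 3)

# Common Random Numbers: replication i of every configuration reuses
# the same seed, so configurations face the same random conditions.
rows = [{"buffer": b, "seed": s, "throughput": simulate(b, s)}
        for b in (0.0, 1.0, 2.0, 4.0)   # configurations to compare
        for s in range(10)]             # 10 replications, shared seeds
df = pd.DataFrame(rows)

# Mixed effects model: buffer size as a fixed effect, the shared
# seed (replication block) as a random effect.
fit = smf.mixedlm("throughput ~ buffer", df, groups=df["seed"]).fit()
print(fit.summary())
```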
There are three use cases that we've found compelling. The first is the long-standing problem of identifying where to put buffers on a highly automated, high-speed packaging line. The second gets a little more into the details of the controls and the physical layout of the conveyance. And in the third, we look at forklift traffic in a facility and try to understand whether a significant investment in a layout change would pay back.
I'll dive in first on the bottling line simulation. Looking at the bottom left, we're representing the system at a flowchart level: in the software tool, we drag out the different processors and queues and arrange them. Each processor has random downtimes, and between those downtimes it runs at a fixed rate. There's a little bit of overspeed, which allows a buffer to disconnect the machines in time, so a stop on one processor doesn't immediately stop the one upstream. The more buffer we provide, the more isolation we get, but there are diminishing returns; the response is nonlinear.
The bottom right shows what the queue dynamics might look like over time: when something stops, the buffer accumulates very quickly; then, when we recover, the speed differential draws the buffer down over time. We've got several variables to work with: machine reliabilities, overspeeds, buffer sizes, and so on. One of our challenges is deciding which of the many variables to work with; where we are in the decision process dictates which variables we can still adjust. By the time we got here, machine selection was already done, so really the only thing left was to look at the buffers.
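To make that buffer dynamic concrete, here is a deliberately simplified time-stepped sketch, not our actual discrete-event model; the rates, capacity, and downtime probabilities are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two machines separated by one buffer (all numbers invented).
RATE_UP, RATE_DOWN = 100, 110    # bottles/min; downstream has ~10% overspeed
BUFFER_CAP = 400                 # buffer capacity in bottles
P_STOP, P_RESTART = 0.02, 0.20   # per-minute downtime probabilities

def step_state(running):
    """Toggle a machine's up/down state with simple random downtime."""
    return rng.random() > P_STOP if running else rng.random() < P_RESTART

level, up, down = 0, True, True
trace = []
for minute in range(480):                     # one 8-hour shift
    up, down = step_state(up), step_state(down)
    inflow = RATE_UP if up else 0
    # Downstream can't pull more than the buffer currently holds.
    outflow = min(RATE_DOWN if down else 0, level + inflow)
    level = min(BUFFER_CAP, level + inflow - outflow)
    trace.append(level)

# When the downstream machine stops, the buffer fills quickly;
# on recovery, the ~10% overspeed draws it back down.
print(f"mean level: {np.mean(trace):.0f}, max level: {max(trace)}")
```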
Traditionally, you'd look at the buffers in isolation: put one buffer here and vary one factor at a time. But with JMP, we set up each of these buffers to range from zero to four minutes, at nine levels each, and ran 10 replications of every combination. The nine levels allowed us to capture the curvature of the response pretty well, and because we're in the computer simulation world, running all 729 combinations was feasible.
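Enumerating that full factorial design is straightforward; a sketch of what the run matrix looks like (the column names are hypothetical):

```python
import itertools
import numpy as np
import pandas as pd

# Nine evenly spaced levels from 0 to 4 minutes for each of three buffers.
levels = np.linspace(0, 4, 9)           # 0.0, 0.5, ..., 4.0
reps = range(10)                        # 10 replications with common seeds

design = pd.DataFrame(
    [(b1, b2, b3, seed)
     for b1, b2, b3 in itertools.product(levels, repeat=3)
     for seed in reps],
    columns=["buffer1", "buffer2", "buffer3", "seed"],
)
print(len(design))   # 9**3 * 10 = 7290 simulation runs
```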
The model runs pretty quickly, but even so, with 10 replications each we ran the model over 7,000 times, and it took half a day or more to get through all the scenarios. Once we generated that data, we could take a look at it in JMP. We're showing the prediction profiler on the top right. Starting from a baseline of all buffers at zero, we can clearly see that the buffer in position two gives the best response, position three somewhat less, and position one really not much at all. But we also modeled the interaction effects. In the prediction profiler, as we choose, say, two and a half minutes on buffer two and two minutes on buffer three, the curve on buffer one gets even flatter. You can't take the full benefit of all three at once: there's an interaction, where putting a lot of buffer at position two makes position three less effective, and so forth.
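Outside of JMP's profiler, the same kind of response-surface fit with interaction and curvature terms might look like the following sketch. The data here are synthetic stand-ins for the 7,290 runs, with coefficients invented purely to show the shape of the analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the simulation results: diminishing returns
# per buffer, a negative interaction, and noise (all invented).
rng = np.random.default_rng(0)
grid = np.linspace(0, 4, 9)
b1, b2, b3 = [np.tile(g.ravel(), 10) for g in np.meshgrid(grid, grid, grid)]
y = (60 * (1 - np.exp(-b2)) + 30 * (1 - np.exp(-b3)) + 5 * b1
     - 4 * b2 * b3 + rng.normal(0, 2, b1.size))
runs = pd.DataFrame({"buffer1": b1, "buffer2": b2, "buffer3": b3,
                     "throughput": y})

# Full quadratic response-surface model: main effects, all two-way
# interactions, and squared terms for curvature.
formula = ("throughput ~ (buffer1 + buffer2 + buffer3)**2"
           " + I(buffer1**2) + I(buffer2**2) + I(buffer3**2)")
fit = smf.ols(formula, data=runs).fit()
print(fit.params.round(2))   # e.g. buffer2:buffer3 captures the interaction
```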
That was a really powerful insight. The original project plan had about a half-million-dollar investment for each of the three buffers, and we were able to say definitively that the buffer in position one wasn't required, pulling at least half a million dollars of capital out. The buffer in the third position we could fine-tune rather than making the full investment. Really good insights for the project.
The next one we'll take a look at gets more into the details of how the system actually works. In this case, there was an existing line in place. Remember from the last example how the overspeed difference between two unit operations lets you draw a buffer down: they'd made improvements to the bottleneck of the system so it ran about 10% faster, which meant the overspeed differential was quite a bit lower. That caused issues in how the conveyor system itself operated.
We had two lanes of product coming in, plus an offline hopper refeeding product, all converging at a merge-divert complex. Our automation engineers on site noted a challenge closing some of the gaps given the distances involved, and that challenge grew as the bottleneck rate increased. Using simulation, we set out to look at: what if we moved the complex 12 feet upstream? What if we moved it 24 feet upstream?
Obviously, it's a significant investment to cut the steel, rearrange all of the conveyor, and get everything dialed in. It was also more challenging from a discrete event modeling perspective, because we had to recreate the layout three different ways: the original layout and the two iterations on it, making the layout changes and fine-tuning the controls for each. So where I said earlier that we could run a bunch of iterations cheaply, this model was more detailed, and each scenario carried more cost in model adjustments.
One thing we didn't quite expect until we plugged the numbers into JMP was a constraint upstream. We knew we didn't have enough space downstream between the next machine and the merge-divert complex, but we found that as we moved the complex 12 feet and then 24 feet upstream, we were getting too close to the upstream equipment. We plugged the results into JMP, used the Fit Model platform with a quadratic fit, and could see performance start to degrade as we moved closer and closer to the upstream constraint.
That was a bit of a surprise that we might not have noticed without this analysis. I don't have it on the screen, but we were able to get the equation of the fit, take the derivative, and find the maximum point. The challenge, of course, is the uncertainty around that point, which lies somewhere between 12 and 24 feet.
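The arithmetic behind that step is just the vertex of the fitted quadratic: for y = ax² + bx + c, setting dy/dx = 2ax + b = 0 gives x* = -b/(2a). A sketch with invented throughput numbers at the three layouts:

```python
import numpy as np

# Hypothetical throughput observations at the three layouts tested
# (0, 12, and 24 feet upstream); values invented for illustration.
position = np.array([0.0, 12.0, 24.0])
throughput = np.array([93.0, 97.5, 96.0])

# Quadratic fit y = a*x**2 + b*x + c; the derivative 2*a*x + b = 0
# locates the stationary point x* = -b / (2*a).
a, b, c = np.polyfit(position, throughput, deg=2)
x_star = -b / (2 * a)
print(f"estimated optimum: {x_star:.1f} feet upstream")   # 15.0 here
```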
One approach, of course, would have been to build a model at 18 feet, near the apparent optimum, and refit with that data. But in the prediction profiler we can also see the uncertainties around the first and second derivatives of the quadratic fit. That helps us understand that while the maximum lies somewhere between 12 and 24 feet, it's not clear exactly where. Other variables will confound this, and there are practicalities in how the system can actually be built, but this gave us really helpful additional insight.
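JMP surfaces that uncertainty through the profiler. As a rough outside-of-JMP illustration of the same question, one could bootstrap the vertex location from replicated runs; this is a swapped-in technique, not what the profiler does internally, and the data are again invented:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical replicated runs: 10 per layout position (values invented).
position = np.repeat([0.0, 12.0, 24.0], 10)
throughput = (93 + 0.625 * position - 0.0208 * position**2
              + rng.normal(0, 1.0, position.size))

# Bootstrap the location of the quadratic's maximum, x* = -b/(2a).
estimates = []
for _ in range(2000):
    idx = rng.integers(0, position.size, position.size)
    a, b, _c = np.polyfit(position[idx], throughput[idx], 2)
    estimates.append(-b / (2 * a))
lo, hi = np.percentile(estimates, [2.5, 97.5])
print(f"x* 95% interval: ({lo:.1f}, {hi:.1f}) feet")
```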
The final one we'll take a look at is about forklift traffic. This facility was ramping up production; they weren't there yet, so we couldn't observe empirically what was going to happen. We built a flowchart model of the packaging lines and used it as a heartbeat for the model, then programmed the forklift dispatch rules, their tasks, and everything the forklifts would do to supply material to the lines and take it away.
One thing we found here was the need to treat the number of forklifts as a categorical variable. When we first went to fit this model, we suspected a nonlinear relationship, but we couldn't see it clearly until we made forklift count categorical. Then we could see that without the connector, once you get to five active forklifts, you really don't get any more benefit from a sixth. We can also see in the prediction profiler that at six forklifts, adding the connector, the layout innovation for this particular situation, offers a big potential gain. That's what we used as the basis to say the layout change was probably worth the significant investment.
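A sketch of that modeling choice with synthetic data (the plateau at five forklifts is built in purely for illustration): treating forklift count as categorical lets each level have its own effect rather than forcing a straight line, and the interaction term lets the connector change the shape:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Synthetic runs: throughput flattens above five forklifts unless
# the connector is present (all numbers invented).
rows = []
for forklifts in range(3, 7):
    for connector in (0, 1):
        for _ in range(10):
            cap = 6 if connector else 5
            base = 50 + 8 * min(forklifts, cap)
            rows.append({"forklifts": forklifts, "connector": connector,
                         "throughput": base + rng.normal(0, 2)})
df = pd.DataFrame(rows)

# C(forklifts) gives one coefficient per forklift count (categorical),
# exposing the plateau; the * adds the connector interaction.
fit = smf.ols("throughput ~ C(forklifts) * connector", data=df).fit()
print(fit.params.round(1))
```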
We also get the ability to look at the interaction profiles. You can clearly see in the interaction profile, with and without the connector, that the system is really constrained without it. There's really nothing operationally they could have done without investing in de-bottlenecking, in decongesting the traffic.
That's a quick survey of some of the ways we've been applying JMP to the simulation models we were already building. I like to say these metamodels have helped our simulation analysts see the forest for the trees. You can get lost in the weeds with discrete event models: there are so many dials to turn, and they can create so much data that it's hard to connect it all back to the actual business problem we're trying to solve. I appreciate your time, and please let us know if you have any questions.