We don’t live in a static world. Dynamic visualization and visual management are essential elements of Lean Six Sigma; they link data and problem solving. As with detective work, it is important to be able to spot clues and patterns of behavior in a situation. Establishing a visual environment enables rapid processing of large data sets, which leads to quick detection of trends and outliers. The goal of Lean is elimination of waste. Waste is present in many forms, such as waiting for information, moving data to multiple sources, and over-processing data. Data visualization allows for reduction of these waste streams.

This presentation provides a real-life case study where JMP is utilized to help “move the data to a story” in a visual way that aids in communicating information, eliminating waste, and driving continuous improvement. This case highlights the use of JMP tools, such as Excel import, Query Builder, Graph Builder, data filters, control charts, basic modeling, reporting, and dashboards. The presentation also explains how visual management helped engage and empower employees throughout the organization.

Hi, everyone.

Thank you for joining us for our presentation

of From Data to Story:

Using visualization to drive continuous improvement.

My name is Allison Bankaitis, and my co-presenter is Scott Wise.

A little bit about myself.

I currently supervise a small team of process engineers

at Coherent Incorporated,

but I'm still very involved in the daily process engineering efforts.

Previously, I held various process engineering roles

at Corning Incorporated, and I'm very excited to show a case study

of how we use some of these JMP tools in our process engineering work.

A little bit from Scott.

Thank you, Allison.

I'm Scott Wise.

I'm from JMP, in support of Allison's JMP usage,

as well as other customers in the Northern California area.

And I'm just real excited to be a part of this really cool case study.

Hopefully, you'll pick up a lot of best practices

and tips from some of the things that helped us.

All right.

I placed the abstract here for future reference,

but just wanted to highlight a few things.

Coherent has placed a recent focus on Lean,

which aims to eliminate waste.

Tools from JMP have aided data visualization,

which in turn has enabled reduction of waste.

Another advantage of these tools is the ability to engage

and empower employees throughout the organization.

These areas will be the focus of this presentation.

Our first section is about eliminating waste in the data collection process.

In this case study,

we had a data collection process with unnecessary complexity.

It used to take 20 minutes to process one part.

So to do this, we had built a data query in Access.

This is just a screenshot here showing an example of a few data tables

where we combined variables from various tables

to get the output that we are looking for.

And then we used a macro to pull data for an individual part into Excel.

This is again,

just a screenshot of an example database connection in Excel

and the code that we would write in Excel.

This was, again, done for each individual part.

From that data, we could then work with the attribute data in Excel.

We could then calculate average values of each attribute.

We would then pull additional data from our MES website,

such as part type or other items listed here.

Then all that data was copied into an Excel summary log.

So we maintain the log,

but we weren't really doing anything to track or analyze the data.

With JMP, I was able to streamline the process.

I built this framework in about an hour

and reduced process time to five minutes per part.

And in this case,

this is just for one engineer, myself, on one product that I've worked on.

But if we can extend this to multiple products

and multiple engineers,

we could really gain large savings of time.

So to do this, I built a data query in JMP,

which included both the attribute and MES data in one location.

Just a screenshot: there are several tables here pulling in the data,

the different variables here,

we can do some initial filtering in the data query.

So in this case I selected a time frame that I wanted to focus on

and then a subset of variables that I down-selected,

so I don't have to manage the whole data set.

And then I can build the data table here

and always clean up more of the data later on as necessary.
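
For anyone who wants to sketch this query step outside JMP, here is a minimal pandas example of joining the attribute and MES sources, filtering to a time frame, and down-selecting columns. The file names, column names, and dates are hypothetical placeholders, not the actual data behind the talk.

```python
import pandas as pd

# Hypothetical sources standing in for the attribute and MES tables.
attribute_df = pd.read_csv("attribute_data.csv", parse_dates=["timestamp"])
mes_df = pd.read_csv("mes_data.csv")

# Combine the two sources on the part identifier, as the query does.
combined = attribute_df.merge(mes_df, on="part_id", how="left")

# Initial filtering: restrict to the time frame of interest...
combined = combined[combined["timestamp"].between("2021-01-01", "2021-12-31")]

# ...and down-select to a manageable subset of variables.
combined = combined[["part_id", "timestamp", "part_type", "X3", "X19", "X21"]]
```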

After I had the data,

I replicated some of the charts that we already had in Excel,

just made them very similar so that people could see

what they're used to dealing with for the time being.

After that, again,

I built the summary table to replicate what they were used to seeing.

I calculated the average attribute data and merged the original table

and the tabulated table into one summary table

so that they could have the output they are used to seeing.

The next thing is to take this data

and move from data to story.

So to do this, the first thing I was curious to know

was what does the data tell us about current performance.

So I plotted the data over time as my first aspect

and I will show that to you in JMP.

So just using this graph builder

and the timestamp that I chose

and then the main output that I started looking at,

added that to the chart here.

What I used to do is manually go in here

and add reference lines using this field here.

But what Scott showed me, which is really neat

and then extends to all of the graphs,

is that you can add it directly to the data table,

you can add the spec limits.

You just go into the Variable of Interest, Column Properties

and go down to Spec Limits and add the values in here.

This is checked so that you can see the graph reference lines on each graph.
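
As a rough illustration of the same idea outside JMP, the sketch below plots the output over time and draws the spec limits as reference lines. It reuses the `combined` table from the earlier sketch, and the spec values are hypothetical placeholders.

```python
import matplotlib.pyplot as plt

LSL, USL = 12.0, 14.0  # hypothetical lower and upper spec limits

fig, ax = plt.subplots()
ax.plot(combined["timestamp"], combined["X21"], marker="o", linestyle="")
ax.axhline(LSL, color="red", linestyle="--", label="LSL")  # lower spec reference line
ax.axhline(USL, color="red", linestyle="--", label="USL")  # upper spec reference line
ax.set_xlabel("Timestamp")
ax.set_ylabel("X21")
ax.legend()
plt.show()
```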

So once I had that output,

I could see that there is a large amount of variation in the data

and many of the values were outside of the spec limit.

So the next thing I wanted to do was compare additional variables.

So to do that pretty quickly, I was able to just add

the column switcher to this graph I already had

by going here and selecting the variable I wanted to change

along with the other variables that I chose.

Then from here I can quickly click through all these variables

and see the variation in each one.
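
A rough stand-in for the Column Switcher, if you want the same behavior outside JMP, is simply to redraw the same plot for each variable in a list. The column list below is a hypothetical placeholder, and the sketch continues from the earlier ones.

```python
import matplotlib.pyplot as plt

switch_columns = ["X21", "X19"]  # hypothetical list of variables to click through

for col in switch_columns:
    fig, ax = plt.subplots()
    ax.plot(combined["timestamp"], combined[col], marker="o", linestyle="")
    ax.set_xlabel("Timestamp")
    ax.set_ylabel(col)
    ax.set_title(f"{col} over time")
    plt.show()
```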

Next for me, I have some process knowledge

and I'm sure you would have process knowledge of your situation as well.

Based on this process knowledge, I was able to select a variable

that I thought might be driving some of the variation.

In my case,

I thought that X3 might be responsible

for driving some of these trends that I was seeing.

I put that into our graph here.

The other piece of process knowledge that I have

is that our spec is based on the average value for each part.

I changed this to mean and then I was curious to see

the line of fit over time, so I added that here as well.
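
The sketch below shows the same idea outside JMP: average the measurements per part (since the spec applies to the part average), then fit a simple line over time within each X3 group. It is only an analogy for the Graph Builder view, reusing the `combined` table from the earlier sketches.

```python
import numpy as np
import matplotlib.pyplot as plt

# Average X21 per part; keep the part's first timestamp and its X3 level.
part_means = (combined
              .groupby(["part_id", "X3"], as_index=False)
              .agg(timestamp=("timestamp", "min"), X21=("X21", "mean")))

fig, ax = plt.subplots()
for level, grp in part_means.groupby("X3"):
    grp = grp.sort_values("timestamp")
    t = grp["timestamp"].astype("int64") / 1e9           # seconds, for fitting
    slope, intercept = np.polyfit(t, grp["X21"], deg=1)  # line of fit over time
    ax.plot(grp["timestamp"], grp["X21"], "o", label=f"X3 = {level}")
    ax.plot(grp["timestamp"], slope * t + intercept, "-")
ax.set_xlabel("Timestamp")
ax.set_ylabel("Mean X21 per part")
ax.legend()
plt.show()
```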

The next thing I was interested in seeing

is a little bit more about performance by looking at a control chart.

To see the control chart, I went to Analyze, Quality and Process, Control Chart Builder,

and I was curious to see it against X2, which is a part number,

and that same variable that I've been looking at.

Again, I was going to split it out by X3.

From here, we can see there's a shift in the average

based on which subset of X3.

Also, the thing that was obvious to me is that the sample sizes were uneven.

To me, knowing the process,

I know there should be 10 collections of data for each part.

So based on our process, I said,

well, to get an initial look at the performance,

I'm going to limit it to only parts that have 10 measurements each.

To do that, we made a new data table,

cleaned up the data again, and once I had this,

I recreated the control chart

with just a small change.

Then here I added a local data filter

to have X3 split out on two separate graphs.

That was my learning.

Now I can see these upper and lower control limits

and this process capability chart,

since now I have an even subgroup sample size of ten.
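
For reference, here is a minimal sketch of the same control-chart logic outside JMP: keep only parts with exactly 10 measurements, then compute the X-bar center line and control limits from the subgroup means and ranges. It uses the textbook A2 constant for subgroups of 10 and the `combined` table from the earlier sketches, so it is an approximation, not JMP's exact output.

```python
# Keep only parts with an even subgroup size of 10 measurements.
counts = combined.groupby("part_id")["X21"].transform("count")
even = combined[counts == 10]

# Subgroup statistics: mean and range of X21 for each part.
sub = even.groupby("part_id")["X21"].agg(["mean", lambda s: s.max() - s.min()])
sub.columns = ["xbar", "range"]

A2 = 0.308  # standard X-bar chart constant for subgroup size 10
center = sub["xbar"].mean()
ucl = center + A2 * sub["range"].mean()
lcl = center - A2 * sub["range"].mean()
print(f"CL = {center:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
```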

That's where I will hand it over to Scott.

Thank you very much.

All right.

I'm going to pick up with the rest of the story.

Allison has done a great job of understanding

where the current performance of her process was.

But we also thought there might be some other key variables within her data

that could be useful for explaining these differences we're seeing in the output.

One of the things that we tried was actually a modeling tool

that's very simple and often used to screen for important variables.

It's called a partition.

In this partition,

all you have to do is of course,

you're going to pull up your data

and then it is under Analyze, Predictive Modeling.

People call this a decision tree,

and I'll show you why when we start to fill it out.

But all you've got to do right now is give it an output,

that's our X21 there,

and give it the inputs we want.

I'm going to put all the inputs in except for X2,

which was kind of a part ID.

I'm going to remove that one.

There was another one that Allison recommended I remove,

given her process knowledge, and that was X8.

But we'll leave all the others in.

When I say Okay,

it brings up the start of a decision tree.

What it's doing is saying, I can make a bunch of splits

and I'm going to look at all the inputs and I'm going to try to find a cut point.

We're basically breaking any of those variables into two groups.

Will that give any explanatory value

toward the differences I'm seeing in the output?

In this case, X21.

So if you make the first split, it's saying that I've explained 27%

of all the difference you're seeing in the output

via just splitting X19 at 500.

If it's greater or equal to 500,

I'm going to have a much lower mean of 12.67.

If it's less than 500, watch out, it jumps up to 13.

This is really cool for finding other things I might want to split out,

break out, and view on my graphs.

You can continue splitting and it will look at other variables

like X3, which came into play here,

and Allison already knew

that was going to be an important variable.

As you keep splitting,

you can see it starts to add in terms of the predictability.

This RSquare, the closer to one, the more predictable.

So it's like 56% predictability here.

I've gone ahead and done that, I'll show you what that view looks like.

Here's the finished view I came up with.

I've got these nice big column contribution bars here at the bottom.

You can see that X19 got split.

It actually found five cut points for X19,

but 52% of all the splits it was doing involved X19,

so it gave it a nice big bar.

Then X3 would be next.

Then everything else had a very small contribution

or no contribution.

It leads us to say,

"Hey, X 19 might be important and it reinforces X3 being important."

Now that we have that information,

well, how confident are we that these things do belong

in our study?

Here it would be nice to look at X21 by X19 broken out.

This one, of course,

is just simply going back into our

Graph Builder.

This time we can put X19 down on the bottom axis,

so that would be the only X.

Let's go ahead and put our X21 right there on the Y.

We can break that out by the X3 variable, which is pretty cool.

Now, one thing we might want to do, X2 was the part ID.

We can give it some color or some overlay.

Either way, I think I will just go ahead and give it some color here

and I will turn off the line.

That's helpful.

But what would be helpful is to use that local data filter

that Allison showed,

in case they want to really look at a specific sequence of parts.

I'll go under the red hotspot there, that red triangle.

I'll go local data filter and then we'll add the X2,

and beautiful.

Now we can go and just change up our view by that local data filter.

That was a cool view that we've got.

I can see that it's making a lot of differences there.

Now one thing you might ask is could we even model this?

Before I even go and model it so we can make some predictions,

how sure am I that X3 and X19 really are affecting X21?

Well, we can actually do a statistical test.

We can test means.

The way we're going to do that here is we are going to go back to our data.

We're just going to go to Analyze,

fit Y by X

and now we're going to go into our output.

We want to look at the effect on X21

from those things we care about, X3 and X19.

I'm going to put them both in here

and it's going to give me some different views.

It's going to enable me to compare means in this one way analysis.

I'm going to right click, I'm going to turn on the means test.

I'm going to right click here.

I even like this All Pairs, Tukey HSD option.

I'm going to adjust our axis here.

It's got these cool means diamonds.

The middle of your diamond is the mean.

The edges are your 95% confidence interval around the mean.

The way it works,

if you would slide these things over, would they overlap?

It looks like they would pass like ships in the night.

There's no overlap.

As well as you got these comparison circles,

you can click on one and see if the other one turns a different color.

All this is based off a 0.05 alpha.

What does that mean?

That's your confidence level, so that's 95% confidence.

We'd be right 95 times out of 100 to say that input X3, at the level there,

is having an effect on what my observed measures are for X21.
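
A comparable means test outside JMP's Fit Y by X is a one-way ANOVA plus an all-pairs Tukey HSD comparison at the usual 0.05 alpha. The sketch below applies it to the effect of X3 on X21, reusing the `combined` table from the earlier sketches.

```python
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One-way ANOVA: does the mean of X21 differ across the levels of X3?
groups = [grp["X21"].values for _, grp in combined.groupby("X3")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# All-pairs Tukey HSD comparison at alpha = 0.05 (95% confidence).
tukey = pairwise_tukeyhsd(combined["X21"], combined["X3"], alpha=0.05)
print(tukey.summary())
```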

Given that, before I go and try to fit a line or a curved line,

I can go under this red triangle hotspot.

I can go, you know what, let's go ahead and group by X3 .

Now when I go back under this triangle option

and I go to fit a line, or in this case, I know there's a little curvature,

so I'm going to fit a quadratic line or polynomial line.

Now it broke it out by X3,

so I'm really, really excited about that one.

The blue line, which is the first version, 3_0; there's the formula for it.

It only has 20% explainability.

It's not a great fit,

but you can see that jumped up to near 70% predictability for the 3_1.

It's telling me that I've got not only significance in saying X3 is different

and I'm seeing a difference when it comes to X19 by X21,

but it matters for X19 what level of X3 we're talking about.

That's why the red line and the blue line are not on top of each other.

Therefore, that's an interaction.

If I'm going to try to predict something, I need to include that.

So at this point,

I think I have all that we're going to need

to get into the hands of Allison's peers:

a really cool tool that can help them predict what the output is going to be

based on settings of X3 and X19.

You're seeing on the screen a profiler that comes off our modeling platform,

and it's very easy to go and set up.

If we go back, I'm going to go to the fit model here.

We'll do our output for X21 again.

Under my inputs, I know X3 and X19,

there they are, are very important.

I told myself X3 and X19 might need to be crossed,

I might need to see those interactions.

I know for X19, there's some curvature.

The way I would check for this is I'd select X19,

go under the Macros button here, and say Polynomial to Degree.

I have it set at two so I would get this curve term, polynomial term here.

There's the interaction, and these are the main effects,

so it's really two factors, but there are four terms in my model.

So I'm just going to run it and it's going to try to fit a line.

This should look very much like the fit Y by X.

It's only really explaining 52%.

This model is only explaining 52% of the differences I'm seeing in X21.

Not perfect, but think about it:

just for having two factors, their interaction,

and one curve term, that's pretty good.
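
Roughly the same model can be written outside JMP's Fit Model as a statsmodels formula: the two main effects, their interaction, and a quadratic term in X19. This is only an analogous sketch using the `combined` table from the earlier examples, not the exact model from the talk.

```python
import statsmodels.formula.api as smf

# X3 and X19, their interaction (the * crosses them), plus a curve term in X19.
model = smf.ols("X21 ~ C(X3) * X19 + I(X19 ** 2)", data=combined).fit()
print(model.summary())  # the R-squared here plays the role of that 52% figure
```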

But what I can do now under that red hotspot is turn on the profiler.

This is worth the price of admission.

This right here is going to enable Allison and her team

to sit there and talk about what settings we should have.

Should I be at v3_1?

Should I be at v3_0

for this X3 input?

Should I be low or high?

Again, it shows that interaction live.

For example,

I'll shift this color here.

Watch what happens when I'm low.

I'm sorry, not low but high on X19; I'm way out here at 500.

By the way, you can type in what you care about.

Maybe I want to see what it's at 480.

Look how flat that line is between _0 and _1.

It doesn't really matter which one I select.

I'm going to get the same kind of prediction.

The red is my prediction, and the blue around it here

is my confidence interval around that prediction.

Of course, this wouldn't be good

because I'm right on the lower spec limits.

Watch what happens when I start to pull it.

Well, I might be happier here with version 3_0 at a setting around 350

because that gets me close to the target.

But if I keep going up here, you see how steep this line begins to get,

and I definitely don't want to be on version 3_1,

because it has a steeper line and the slope there is very steep.

It's all coming out, but it's interactive in this profiler

and now we can play with what would be the right settings for

if I had to stay with version 3_1.

If I go to version 3_0, what would be the right settings here?

They might be different settings.

There's always multiple optimal settings you can select.

This is really cool.

We now have the ability to predict.
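
A simple stand-in for the profiler, using the model fitted in the sketch above, is to evaluate the prediction and its confidence band over a grid of X19 values for each X3 level. The grid range below is a hypothetical placeholder.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
for level in combined["X3"].unique():
    grid = pd.DataFrame({"X19": np.linspace(300, 500, 50), "X3": level})
    pred = model.get_prediction(grid).summary_frame(alpha=0.05)
    ax.plot(grid["X19"], pred["mean"], label=f"X3 = {level}")   # prediction line
    ax.fill_between(grid["X19"], pred["mean_ci_lower"],
                    pred["mean_ci_upper"], alpha=0.2)           # confidence band
ax.set_xlabel("X19")
ax.set_ylabel("Predicted X21")
ax.legend()
plt.show()
```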

All right.

Continuous process improvements.

All this was great.

We now have a faster way to get our analysis done.

We've gone through a flow that enables us to find what's important

and see what's important.

But what if we want to use that information to monitor over time

and continually improve our process?

It might be nice to have a dashboard for different levels of X3.

Allison and I worked to create a standard type of dashboard

that her team is used to seeing.

They're used to seeing control charts first,

and then the process capability around their specs.

Then next they would want to see the output over time.

That's the top chart in the middle there

and then below that if there's anything else they should worry about.

That was our big finding, that, "Hey, X19 has an effect,"

so they would want to see that.

Lastly, on the right-hand side, we put a table with the mean values

for the output of interest, plus even some more outputs they like to take a look at.

Of course, we want this to be interactive.

So how can we build this dashboard for level zero and level one?

We're going to bring up our data here.

I think I already have it opened up here.

I will go now and just create everything in one swoop.

This is why it's nice to be able to save your graphs and your analyses back to the data table.

I'm just going to click and create a whole bunch of views here

that are going to replicate what the team wants to see.

Here is that control chart builder for the X bar and R.

Next, we have the process capability.

Next, we have that output over time.

Next, we have the output over X19 that we wanted to show.

Now we have the table.

I have all the elements, and if you have all the elements,

you don't have to save them back and make someone run them one at a time.

You can combine them into a dashboard template

and it's under File, New Dashboard.

It will allow you to pick some type of template to start off with.

I'm just going to pick this blank template.

Now it's got all my reports, all the graphs and tables

and things I've opened on the left.

Now I can just bring into the body of the dashboard what I care about

and I can orient things the way I would like to see them on my dashboard.

When I'm done, it's easy to go and run that dashboard

and then later save that dashboard when I'm ready.
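
This is not JMP's dashboard builder, but as a sketch of the same layout outside JMP you can tile the views into one figure: control chart and capability on the left, the two output views in the middle, and the mean table on the right. It reuses objects from the earlier sketches (`combined`, `sub`, the control limits, and the spec limits).

```python
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 3, figsize=(14, 7))

# X-bar chart of the subgroup means with the earlier control limits.
axes[0, 0].plot(range(len(sub)), sub["xbar"], marker="o")
axes[0, 0].axhline(ucl, linestyle="--")
axes[0, 0].axhline(lcl, linestyle="--")
axes[0, 0].set_title("X-bar chart")

# Capability-style view: histogram of the output against the spec limits.
axes[1, 0].hist(combined["X21"], bins=30)
axes[1, 0].axvline(LSL, color="red")
axes[1, 0].axvline(USL, color="red")
axes[1, 0].set_title("Capability vs. specs")

# Output over time, and output against X19.
axes[0, 1].plot(combined["timestamp"], combined["X21"], ".")
axes[0, 1].set_title("X21 over time")
axes[1, 1].scatter(combined["X19"], combined["X21"], s=10)
axes[1, 1].set_title("X21 vs X19")

# Mean table on the right, one row per X3 level.
means = combined.groupby("X3")["X21"].mean().round(3).reset_index()
axes[0, 2].axis("off")
axes[1, 2].axis("off")
axes[0, 2].table(cellText=means.values, colLabels=means.columns, loc="center")

fig.tight_layout()
plt.show()
```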

But I've already got that run here.

So I'm going to close down the dashboard builder.

I'm going to show you the dashboard we have already created

to capture all this information.

With one click of the button, here's my dashboard.

And boy, beautiful looking dashboard here, just the way I want to see it.

Now, the thing that we loved about this

was your ability as well to still use the JMP dynamic linkage.

I can select a couple of high points

and I can see where they will flow in the other graphs.

I can even see down here where they're highlighted in my table.

So this is great, but what about that X3 variable?

We knew we wanted to be able to create separate dashboards for each of those.

So instead of using a local data filter, I'm going to use a global data filter.

It's actually under your Rows menu.

It's right at the bottom.

This one affects all graphs, all analysis.

It affects what's hidden and selected back to your data table.

On this one, I'll just go ahead and put X3 .

Now when I click on Show and Include,

I'll turn off the select so I can make my own selections.

Now I can toggle between that _0 and that _1.

Now it works the same way.

I can see things that were out of control or out of spec here for just version 3_0,

then I can do the same thing for 3_1.
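
Outside JMP, the global-filter idea is simply to subset the table by X3 level and rebuild the same views from that subset. The helper below is a hypothetical stand-in that redraws just one view per level; in practice it would rebuild every panel of the dashboard sketch above.

```python
import matplotlib.pyplot as plt

def plot_views(frame, level):
    # Minimal example: redraw only the output-over-time view for this level.
    fig, ax = plt.subplots()
    ax.plot(frame["timestamp"], frame["X21"], ".")
    ax.set_title(f"X21 over time, X3 = {level}")
    plt.show()

# One set of views per X3 level, analogous to toggling the global filter.
for level in combined["X3"].unique():
    plot_views(combined[combined["X3"] == level], level)
```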

There we go.

We have a nice tool that can really be used to, again, not just get data quicker

and not just do one analysis,

but actually make this a continuous process improvement tool

that we can use day in and day out to quickly get the view we want

and ask the questions we need to drive improvements.

All right.

So that is our story of moving from data to story, I should say.

We wanted to leave you with where to learn more,

where to get more information.

Of course, we're going to give you the presentation.

We're going to give you the journal we use

so you can replicate these views we're seeing.

But Allison and I felt that if you were wanting to really get started with JMP,

go to the Getting Started with JMP webinars that we have.

It's on the JMP website; we'll include links in the journal,

and it covers about everything we showed you today.

We had a few more tips and tricks,

but the new user welcome kit is another really good thing to take.

This one allows you to work with a data set,

it gives you a data set that you can follow along with,

and it has really nice step-by-step instructions.

We're both big fans of the Statistical Thinking for Industrial Problem Solving.

It's free online learning, basically an e-learning course,

and there are so many different modules you can take.

I've used this to do just in time learning,

and I've had a lot of people take all the sections just to get up to speed

on everything JMP can do to help you compare and describe

and predict all those fun things you want to do.

Don't forget, if you have specific things you want to do,

we do have Mastering JMP webinars that are available here.

The JMP Community, community.jmp.com, is a good place to look

for just in time learning,

and as well, JMP Education,

if you want to get more of the underlying theory

on how a lot of these things work.

We do a lot of public training, or we can customize training for you as well.

Just talk to JMP Education.

All right.

I will allow Allison to say a few words when we finish.

But thanks, everybody, for joining us,

and we hope you picked up on a few things you would like to try within JMP.

Thanks, Scott.

Thanks, everyone, for joining us.

It was really a pleasure to share this case study from Coherent with you

and to share all the new cool tricks that Scott has taught me

and that we've learned through our journey with JMP at Coherent.

So thanks again and take care.

Bye.
