
Creating a Reliability Modeling & Report Generation App using JSL + JMP Reliability Capabilities (2020-US-30MP-591)

Level: Intermediate

 

Shamgar McDowell, Senior Analytics and Reliability Engineer, GE Gas Power Engineering

 

Faced with the business need to reduce project cycle time and to standardize the process and outputs, the GE Gas Turbine Reliability Team turned to JMP for a solution. Using the JMP Scripting Language and JMP’s built-in Reliability and Survival platform, GE and a trusted third party created a tool to ingest previous model information and new empirical data which allows the user to interactively create updated reliability models and generate reports using standardized formats. The tool takes a task that would have previously taken days or weeks of manual data manipulation (in addition to tedious copying and pasting of images into PowerPoint) and allows a user to perform it in minutes. In addition to the time savings, the tool enables new team members to learn the modeling process faster and to focus less on data manipulation. The GE Gas Turbine Reliability Team continues to update and expand the capabilities of the tool based on business needs.

 

 

 

Auto-generated transcript...

 

Speaker

Transcript

Shamgar McDowell Maya Angelou famously said, "Do the best you can, until you know better. Then when you know better, do better." Good morning, good afternoon, good evening.
I hope you're enjoying the JMP Discovery Summit and that you're learning some better ways of doing the things you need to do.
I'm Shamgar McDowell, senior reliability and analytics engineer at GE Gas Power. I've been at GE for 15 years and have worked in sourcing, quality, manufacturing and engineering.
Today I'm going to share a bit about our team's journey to automating reliability modeling using JMP.
Perhaps your organization faces a similar challenge to the one I'm about to describe. As I walk you through how we approach this challenge, I hope our time together will provide you with some things to reflect upon as you look to improve the workflows in your own business context.
So by way of background, I want to spend the next couple of slides explaining a little bit about the GE Gas Power business. First off, our products.
We make high tech, very large engines that have a variety of applications, but primarily they're used in the production of electricity.
And from a technology standpoint, these machines are actually incredible feats of engineering with firing temperatures well above the melting point of the alloys used in the hot section.
A single gas turbine can generate enough electricity to reliably power hundreds of thousands of homes.
And just to give an idea of the size of these machines: in this picture on the right you can see four adult human beings, which just kind of points to how big these machines really are.
So I had to throw in a few gratuitous JMP graph building examples here. But the bubble plot and the tree map really underscore the global nature of our customer base.
We are providing cleaner, accessible energy that people depend upon the world over, and that includes developing nations that historically might not have had access to power and the many life-changing effects that go with it.
So as I've come to appreciate the impact that our work has on everyday lives of so many people worldwide, it's been both humbling and helpful in providing a purpose for what I do and the rest of our team does each day.
So I'm part of the reliability analytics and data engineering team.
Our team is responsible for providing our business with empirical risk and reliability models that are used in a number of different ways by internal teams.
So in that context, we count on the analysts on our team to be able to focus on engineering tasks, such as understanding the physics that affect our
components, the quality and applicability of the data we use, and also the trade-offs in the modeling approaches and the best way to extract value from our data.
These are all value-added tasks.
Our process also entails that we go through a rigorous review with the chief engineers. So having a PowerPoint pitch containing the models is part of that process.
And previously, creating this presentation entailed significant copying and pasting across a variety of tools, and this was both time consuming and prone to errors. So that's not value added.
So we needed a solution that would give our engineers more time to focus on the value-added tasks and would also further standardize the process. Those were the two goals:
greater productivity and the ability to focus on what matters, plus further standardization. And so to that end, we use the mantra Automate the Boring Stuff.
So I wanted to give you a feel for the scale of the data sets we used. Often the volume of the data that you're dealing with can dictate the direction you go in terms of solutions.
And in our case, there's some variation but just as a general rule, we're dealing with thousands of gas turbines in the field,
hundreds of tracked components in each unit, and then there are tens of inspections or reconditionings per component.
So in all, there are millions of records that we're dealing with. But typically, our models are targeted at specific configurations, and thus they're built on more limited data sets with tens of thousands of records or fewer.
The other thing I was going to point out here is that we often have over 100 columns in our data set. So there are challenges with this data size that made JMP a much better fit than something like an Excel-based approach to the same tasks.
So, GE worked with a third party to develop the first version of this tool using the JMP Scripting Language.
The name of the tool is the Computer-Aided Reliability Modeling Application, or CARMA, with a C. And the amount of effort involved in building this out to what we have today is not trivial.
This is a representation of that. You can see the number of scripts and code lines that testify to the scope and size of the tool as it stands today. But it's also proven to be a very useful tool for us.
So as time has gone on, we've seen the need to continue to develop and improve CARMA. And in order to do this, we've had to grow and foster some in-house
expertise in JSL coding, and I oversee the work of developers who focus on this and some related tools.
The message here is that even after you create something like CARMA, there's going to be an ongoing investment required to maintain the app, keep it relevant, and evolve it as your business needs evolve.
But it's both doable and the benefits are very real. A survey of our users this summer actually pointed to
a net promoter score of 100% and at least a 25% reduction in the cycle time to do a model update. So that's real time
that's being saved. And then, anecdotally, we also see where CARMA has surfaced issues in our process that we've been able to address, issues that otherwise might have remained hidden.
And I have a quote, it's kind of long, but I wanted to pass on this caveat about automation from Bill Gates, who knows a thing or two about software development.
"The first rule of any technology used in business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency."
So that's the end of the quote, but this is just a great reminder that automation is not a silver bullet that will fix a broken process, we still need people to do that today.
Okay, so before we do a demonstration of the tool, I just wanted to give a high-level overview of the inputs and outputs in CARMA.
The user has to point the tool to the input files. So over here on the left, you see we have an active models file, which is essentially the already approved models, and then we have empirical data.
And then in the user interface, the user does some modeling activities.
And then the outputs are the running models, so updates to the active models, and a PowerPoint presentation.
And we'll also look at that.
As background for the data I'll be using in the demo,
I just wanted to pass on that I started with the locomotive data set, which, as we'll see, JMP provides as sample data.
That gives one population. Then I also added in two additional populations of models.
And the big message I wanted to pass on is that what we're going to see is all made-up data. It's not real; it doesn't represent the functionality
or the behavior of any of our parts in the field; it's just all contrived. So keep that in mind as we go through the results, but it should give us a way to look at the tool, nonetheless.
So I'm going to switch over to JMP for a second and I'm using JMP 15.2 for the demo. And this data set is simplified compared to what we normally see. But like I said, it should exercise the core functionality in CARMA.
So first, I'm just going to go to the Help menu, Sample Data, and you'll see the
Reliability and Survival section here. So that's where we're going. One of the nice things about JMP is that it has functionality and specialized tools for a lot of different disciplines. And so for my case with reliability, there's a lot here, which also adds to
the value of using JMP as a home for CARMA.
But I wanted to point you to the locomotive data set and just show you that
this originally came out of a textbook,
Applied Life Data Analysis, as it notes here. In that book, there's a problem that asks what the risk is at 80,000 exposures, and we're going to model that today in our data set,
in what we've called an oxidation model; essentially CARMA will give us the answer. Again, a really simple answer, but I was just going to show you that you can get the same
result by clicking
in the Analyze menu.
So we go down to Analyze, Reliability and Survival, Life Distribution.
Put the time and censor columns where they need to go.
We're going to use Weibull,
just the two-parameter version, so it creates a fit for that data.
The two parameters I was going to point out are the beta, 2.3, and then what's called the Weibull alpha here, 183. In our tool, it'll be called eta.
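For reference, here is a minimal JSL sketch of that same fit, assuming the Locomotive table sits in the Reliability sample data folder and uses Time and Censor as its column names; the exact platform messages can vary slightly by JMP version.

    // Open the locomotive sample data shipped with JMP (path assumed)
    dt = Open( "$SAMPLE_DATA/Reliability/Locomotive.jmp" );

    // Launch Life Distribution on the time-to-event data with its censoring column,
    // then request a two-parameter Weibull fit
    ld = dt << Life Distribution(
        Y( :Time ),
        Censor( :Censor ),
        Fit Weibull
    );

The report that opens shows the same beta and Weibull alpha (eta) values as the interactive fit.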
Okay, so we see how to do that here.
Now, just to jump over,
I want to look at a couple of the other files, the input files, so
I will pull those up.
Okay, this is the model file.
I mentioned I made three models. And so these are the active models that we're going to be comparing the data against.
You'll see that oxidation is the first one, as I mentioned. And
one thing to note is that in addition to having the model parameters, it has some configuration information. These are just two simple examples here
(combustion system, fuel capability) that I use, but there are many, many more columns like them. But essentially what CARMA does, and one of the things I like about it, is that when you have a large data set with a lot of varied configurations, it can go through and find
which of those rows of records applies to your model and do the sorting in real time, and do that for all the models you need to build from the data set. And so that's what we're going to use it to demonstrate.
Excuse me.
Also, let's jump over to the empirical data for a minute.
And just as a highlight, we have a censor column, we have exposures, we have the interval that we're going to evaluate those exposures at, modes, and then the last two columns I just talked about, combustion system and fuel capability.
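To give a flavor of the configuration matching CARMA does, here is a minimal JSL sketch of pulling out the empirical-data rows that match one model's configuration. The column names follow the demo data, but the specific values ("DLN", "Gas") and the subset step are made up for illustration and are not CARMA's actual code.

    // dt is the empirical data table already open in JMP
    dt = Current Data Table();

    // Select the rows whose configuration matches the model being updated
    // (the configuration values below are hypothetical)
    dt << Select Where( :Combustion System == "DLN" & :Fuel Capability == "Gas" );

    // Pull the matching records into their own table for modeling
    model_dt = dt << Subset( Selected Rows( 1 ) );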
Okay, so let's start up CARMA.
It runs as an add-in, so I'll just get it going.
And you'll see I already have it pointing to the location
I want to use.
In
today's presentation, I'm not going to have time to talk through the full variety of features that are in here. But these are all things that can help you look at your data, decide the best way to model it, and do some checks on it before you finalize your models.
For the purposes of time, I'm not going to explain and demonstrate all of that, but I just wanted to take a minute to build the three models we talked about and create a presentation, so you can see that portion of the functionality.
Excuse me, my
throat is getting dry all of a sudden, so I have to keep drinking; I apologize for that.
So we've got oxidation.
We see the number of failures and suspensions.
That's the same as what you'll see in the text.
Add that.
And let's just scroll down for a second. That's the
first model added:
oxidation. We see the old model had 30 failures and 50 suspensions. This one has 37 and 59.
The beta is 2.33, like we saw externally, and the eta is 183.
And the answer to the textbook question, the risk at 80,000 exposures,
is about 13.5% using a Weibull model. So that's just kind of a high-level way to do that here.
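As a sanity check, that risk number falls directly out of the two-parameter Weibull cumulative failure function using the fitted values above; a minimal JSL sketch, with time expressed in thousands of exposures to match the fitted eta, is:

    // Two-parameter Weibull cumulative failure probability: F(t) = 1 - Exp( -(t/eta)^beta )
    beta = 2.33;  // shape parameter from the fit above
    eta = 183;    // scale parameter (thousands of exposures) from the fit above
    t = 80;       // 80,000 exposures, in the same units as eta
    risk = 1 - Exp( -((t / eta) ^ beta) );
    Show( risk ); // roughly 0.135, i.e., about 13.5%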
Let's also look at adding the other two models.
Okay, we've got cracking, and I'm adding in creep.
And you'll see in here
there are different boxes presented that represent things like the combustion system or the fuel capability, where for this given model, this is what the LDM file calls for.
But if I wanted to change that, I could select other configurations here, and that would change my rows of failures and suspensions as far as what gets included or doesn't.
And then I can create new populations and segment them
accordingly.
Okay, so we've got all three models added,
and I think, you know, we're not going to spend more time playing with the model options, but I'm going to generate a report.
And I have some options on what I want to include in the report.
And I get a presentation, and this LDM output is going to be the active models (sorry, the running models) that
come out as a table.
All right, so I just need to
select the appropriate folder
where I want my presentation to go.
And now it's going to take a minute here to go through and generate this report. This does take a minute, but I would just contrast it to the hours it would normally take to do this same task
if you were working outside of the tool. And so now we're ready to finalize the report.
Save it to the selected folder, and now it's done. It's in there and we can review it.
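As a side note on how JMP report content can end up in PowerPoint at all, JMP (version 13 and later) can save a report window to a .pptx directly from JSL. The sketch below is a generic illustration rather than CARMA's actual report code, and the sample data path and output path are made up.

    // Fit a distribution, then save the resulting report as a PowerPoint file
    dt = Open( "$SAMPLE_DATA/Reliability/Locomotive.jmp" );  // path assumed
    ld = dt << Life Distribution( Y( :Time ), Censor( :Censor ), Fit Weibull );

    // Export the platform report to PowerPoint (illustrative path)
    Report( ld ) << Save Presentation( "C:/Temp/oxidation_model.pptx" );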
The other thing I'll point out is that I'd already generated this presentation
previously, so I'll just
pull up
the file that I already generated and we can look through it. It's a template; it's meant for speed, but it can be further customized after you make it. You can leave placeholders, and you can modify the slides after you've generated them.
It's doing more than just the life distribution modeling that I kind of highlighted initially. It's doing a lot of summary work, summarizing the data included in each model, which, of course, JMP is very good for.
It does some work comparing the models, so you can
do a variety of statistical tests using JMP, and again, JMP is great at that. So that adds that functionality.
Some of the things our reviewers like to see are how the models have changed year over year: you have more data, or you include less. How does it affect the parameters? How does it change your risk numbers?
Plots, of course; you get a lot of information out of scatter plots and things of that nature.
There's a summary that includes some of the configuration information we talked about, as well as the final parameters.
And it does this for each of the three models,
as well as a risk roll-up at the end
for all of these
combined.
So that was a quick walkthrough of the demo.
I think we've covered everything I wanted to do.
Hopefully we'll get to talk a little more in Q&A if you have more questions. It's hard to anticipate everything.
But I just wanted to talk through some of the benefits again. I've mentioned this previously, but we've seen productivity increases as a result of CARMA, so that's a benefit.
Of course, standardization of our modeling process has increased, and that also allows team members who are newer to focus more on the process and learning it, versus wrestling with tools,
which, in the end, helps them come up to speed faster. And then there's also increased employee engagement from allowing engineers to use
their minds where they can make the biggest impact.
So I also wanted to be sure to thank Melissa Seely, Brad Foulkes, Preston Kemp and Waldemar Zero for their contributions to this presentation. I owe them a debt of gratitude for all they've done in supporting it.
And I want to thank you for your time. I've enjoyed sharing our journey towards improvement with you all today. I hope we have a chance to connect in the Q&A time, but either way, enjoy the rest of the summit.