By nina_chan
Bill Meeker on reliability in the age of big data

William Q. Meeker is an expert on reliability analysis.

Reliability analysis is important in all areas of product manufacturing. In the age of big data and machine learning, it is crucial to find the relevant information to make good decisions and ensure longevity. Bill Meeker, reliability expert and Professor of Statistics and Distinguished Professor at Iowa State University, will introduce new reliability and Bayesian methods based on case studies in a complimentary seminar on Sept. 13. You can sign up for the in-person event in Stockholm, Sweden, or follow the live stream online. I asked Bill Meeker a few questions about big data, pitfalls in reliability analysis, and the latest developments in the area.

Analytics is a hot topic at the moment. Buzzwords include data science, AI, machine learning, Industry 4.0, and big data. How does reliability analysis relate to this?

Yes, these are very exciting times for all of us who are working with data, especially now that we have easy-to-use software tools to work with the data. Reliability data come from two different sources: laboratory accelerated tests and warranty or other field data. Based on some recent experiences, Yili Hong and I wrote a paper, "Reliability Meets Big Data: Opportunities and Challenges" (Quality Engineering, 2014), describing how reliability field data are changing dramatically. Because many systems are being outfitted with sensors and communications capabilities, field data now contain (and sometimes provide in real time) detailed information about how a system is being operated, including environmental variables like temperature, shocks that the system has experienced, amount of use, and so forth. The resulting data sets providing these time-varying covariates are huge relative to traditional reliability field data, which only report failure times for units that failed and running times for those that have not failed. We have seen applications where the proper use of such data can reduce the effective amount of extrapolation that is needed to make predictions and can also provide the ability to make predictions about individual units, as opposed to the population of units.
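To make the contrast concrete, here is a minimal sketch (my own illustration, not from the interview) of how traditional reliability field data are typically structured and analyzed: exact failure times for failed units, right-censored running times for survivors, and a Weibull distribution fit by maximum likelihood. All of the numbers are hypothetical.

```python
# A minimal sketch (hypothetical data): maximum likelihood fit of a Weibull
# distribution to traditional reliability field data, i.e. exact failure
# times for failed units and right-censored running times for survivors.
import numpy as np
from scipy.optimize import minimize

# Hypothetical field data: hours to failure (event=1) or hours accumulated
# so far without failure (event=0, right-censored).
times = np.array([1200., 2300., 3100., 4500., 5200., 6000., 6000., 6000.])
event = np.array([1,     1,     1,     1,     0,     0,     0,     0])

def neg_log_lik(params):
    """Weibull negative log-likelihood with right censoring."""
    log_shape, log_scale = params              # work on the log scale for stability
    beta, eta = np.exp(log_shape), np.exp(log_scale)
    z = times / eta
    # Failed units contribute the density; censored units the survival function.
    log_f = np.log(beta / eta) + (beta - 1) * np.log(z) - z**beta
    log_s = -z**beta
    return -np.sum(event * log_f + (1 - event) * log_s)

fit = minimize(neg_log_lik, x0=[0.0, np.log(times.mean())], method="Nelder-Mead")
beta_hat, eta_hat = np.exp(fit.x)
print(f"Weibull shape ~ {beta_hat:.2f}, scale ~ {eta_hat:.0f} hours")
```

The newer kind of field data Meeker describes would add time-varying covariates (temperature, load, usage) for every unit, failed or not, on top of this basic failure/censoring structure.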

There have been huge successes in the use of machine learning/AI techniques for certain applications (Google Translate and the control of autonomous vehicles are two that immediately come to mind). I have, however, also heard about dismal failures in other applications, particularly in the area of reliability prediction for the purposes of making decisions for the operation and maintenance of fleets of complicated systems. I suspect that the reason is that the successes have been in applications where the machine learning techniques were solving a complicated interpolation problem. Most of the needed information is in the big data, and the machine learning techniques are able to extract and use the needed information. In many prediction applications, however, there is the need for extrapolation beyond the range of the data. There may be big data involved, but the amount of needed information is limited. It is necessary to combine subject-matter knowledge (e.g., about the physics of failure) with the limited information in the big data. This is a subject of current research. 
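The interpolation-versus-extrapolation point can be illustrated with a small simulation (an assumption-laden sketch of my own, not an example from the interview): a flexible machine-learning model fit to accelerated-test data observed only at high temperatures interpolates well but extrapolates poorly to use conditions, while a simple model that encodes physics-of-failure knowledge (here, an assumed Arrhenius relationship with activation energy 0.7 eV) extrapolates more sensibly.

```python
# A minimal sketch (simulated data): a flexible ML fit can interpolate
# lifetime-vs-temperature data well but extrapolate poorly, whereas an
# Arrhenius-based model built from physics-of-failure knowledge extrapolates
# more sensibly. All numbers are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
k_B = 8.617e-5                      # Boltzmann constant, eV/K
Ea = 0.7                            # assumed activation energy, eV

def true_log_life(temp_C):
    """Arrhenius 'truth' used to simulate the training data."""
    return 2.0 + Ea / (k_B * (temp_C + 273.15))

# Accelerated-test data observed only at high temperatures (80-140 C).
temps = rng.uniform(80, 140, 200)
log_life = true_log_life(temps) + rng.normal(0, 0.1, temps.size)

forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(temps.reshape(-1, 1), log_life)

# Physics-based alternative: regress log-life on 1/(k_B * T).
x = 1.0 / (k_B * (temps + 273.15))
slope, intercept = np.polyfit(x, log_life, 1)

use_temp = 40.0                     # field use condition: extrapolation
x_use = 1.0 / (k_B * (use_temp + 273.15))
print("truth at 40 C   :", true_log_life(use_temp))
print("random forest   :", forest.predict([[use_temp]])[0])
print("Arrhenius fit   :", intercept + slope * x_use)
```

The flexible model simply flattens out beyond the range of the training data, while the physics-informed model carries the trend through the extrapolation.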

In which industries is reliability analysis most important? For which products or processes should it be used? 

Reliability is (or should be) important in almost all areas of manufacturing and complicated system operations. Two areas that have always had a strong focus on reliability are aerospace and nuclear power generation, for obvious reasons of safety. To some extent, high reliability has been achieved in these areas by using conservative engineering designs, although there have been well-known failures (e.g., Challenger, UA232, Columbia, Three Mile Island, Chernobyl, and Fukushima).

Of course, safety is also important in products like automobiles, computers, and cell phones (recall the laptop battery and cell phone battery reliability problems), but reliability matters also because consumers expect that the products they purchase will have high reliability. The markets for these products are highly competitive, and there is a need for high reliability without excessive cost. The company that couples an appealing product with high reliability at a reasonable cost will be most successful. Finding the right balance requires appropriate reliability analyses and related decision making.

What are the greatest pitfalls to be aware of? 

There are many potential pitfalls in reliability analysis. I have coauthored two papers specifically about the pitfalls of accelerated testing (IEEE Transactions on Reliability, 1998, and Journal of Quality Technology, 2013). One of the most common pitfalls of accelerated testing is testing at levels of stress that cause failure modes that would never be seen in actual applications. Interestingly, if not recognized, such failures can lead to predictions of product lifetime that are seriously, and incorrectly, optimistic.

In the prediction of warranty claims, a serious pitfall is to assume that a distribution that fits well to early returns (say, the first six months that a product is in the field) can be extrapolated to predict the proportion of units that will fail within a three-year warranty period. Generally, when extrapolating, it is necessary to use knowledge of the physics of failure to justify any assumptions about the form of the lifetime distribution.
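As a worked illustration of that pitfall (with hypothetical numbers, not figures from the interview), suppose 1% of units have failed by month 6. The fraction predicted to fail by month 36 then depends heavily on the assumed Weibull shape parameter, even though every choice shown below is consistent with the same early-return fraction.

```python
# A minimal sketch (hypothetical numbers): the implied fraction failing by
# month 36 depends strongly on the assumed Weibull shape parameter beta,
# so a good fit to early returns alone does not justify the extrapolation.
import numpy as np

t_early, p_early, t_warranty = 6.0, 0.01, 36.0   # months, fraction, months

for beta in [0.8, 1.0, 1.5, 2.0]:
    # Solve for the Weibull scale eta that reproduces F(6) = 1%:
    #   F(t) = 1 - exp(-(t/eta)^beta)  =>  eta = t / (-log(1 - p))**(1/beta)
    eta = t_early / (-np.log(1.0 - p_early)) ** (1.0 / beta)
    p_warranty = 1.0 - np.exp(-(t_warranty / eta) ** beta)
    print(f"beta = {beta:.1f}:  predicted F(36 mo) = {p_warranty:.3f}")
```

The predicted three-year fraction failing ranges from roughly 4% to 30% across these shape values, which is exactly why physics-of-failure knowledge is needed to pin down the form of the lifetime distribution before extrapolating.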

What are the latest developments in this area? What are the new methods that you will introduce on Sept. 13 in Stockholm?

Over the past 25 years, there has been a tremendous increase in the use of Bayesian methods in statistical applications. In the area of reliability, however, the increases have not been so dramatic (the work at Los Alamos National Laboratory on system reliability is one notable exception). I think that there are two reasons for this: the inherent conservatism of the reliability discipline and the lack of easy-to-use tools to implement Bayesian methods.

I frequently see reliability applications where the use of Bayesian methods is compelling, and I have been using them more extensively myself. I predict that there will be substantial growth in the use of Bayesian methods in the near future. For the last several versions, JMP has had capabilities to use Bayesian methods to fit life distributions, allowing engineers, for example, to use the knowledge that they often have about the Weibull shape parameter, resulting in better estimation precision (or the potential to run tests with fewer test units).
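Here is a minimal sketch of the idea (my own grid-based illustration, not JMP's implementation): a Bayesian Weibull fit to right-censored data in which an informative prior on the shape parameter encodes engineering knowledge that failures are wear-out related (shape near 2). All data values and prior settings are assumptions.

```python
# A minimal sketch (made-up data, not JMP's implementation): Bayesian fit of
# a Weibull life distribution to right-censored data, with an informative
# lognormal prior on the shape parameter centered at 2 (wear-out behavior).
# The posterior is computed on a simple parameter grid.
import numpy as np

times = np.array([150., 280., 410., 500., 500., 500.])   # hours
event = np.array([1,    1,    1,    0,    0,    0])       # 1 = failure, 0 = censored

betas = np.linspace(0.3, 6.0, 300)          # shape grid
etas = np.linspace(100.0, 3000.0, 400)      # scale grid
B, E = np.meshgrid(betas, etas, indexing="ij")

# Log-likelihood with right censoring, vectorized over the grid.
z = times[None, None, :] / E[..., None]
log_f = np.log(B[..., None] / E[..., None]) + (B[..., None] - 1) * np.log(z) - z**B[..., None]
log_s = -z**B[..., None]
log_lik = np.sum(np.where(event == 1, log_f, log_s), axis=-1)

# Informative prior on shape (lognormal centered at beta = 2); flat prior on scale.
log_prior = -0.5 * ((np.log(B) - np.log(2.0)) / 0.3) ** 2

log_post = log_lik + log_prior
post = np.exp(log_post - log_post.max())
post /= post.sum()

beta_mean = np.sum(post * B)
eta_mean = np.sum(post * E)
print(f"posterior mean shape ~ {beta_mean:.2f}, scale ~ {eta_mean:.0f} hours")
```

With only three failures, the informative shape prior does much of the work of pinning down the distribution, which is the kind of precision gain (or reduction in required test units) Meeker describes.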

In JMP 14, there is now the capability to use Bayesian methods for analyzing accelerated life test data. This is also an important advantage because it allows engineers to use, for example, the knowledge that they often have about the effective activation energy in temperature-accelerated tests. Although not yet available in JMP, I have been using Bayesian methods to fit models describing repeated measures data from both degradation tests and tests to estimate the probability of detection of cracks in structural health monitoring applications. In these applications, Bayesian methods, even with diffuse prior distributions, have the important advantage of providing easy-to-compute, trustworthy confidence intervals for the important quantities of interest. And in situations where prior information is available, it is easy to integrate the different sources of information.
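A small illustration of why the activation-energy knowledge matters (hypothetical numbers, not JMP's Fit Life by X implementation): drawing the effective activation energy from an assumed engineering prior and propagating it through the Arrhenius relationship shows how uncertainty about the activation energy translates directly into uncertainty in the acceleration factor used to extrapolate from test to use conditions.

```python
# A minimal sketch (assumed prior, illustrative only): prior knowledge about
# the effective activation energy E_a in a temperature-accelerated test
# determines the Arrhenius acceleration factor between test and use conditions.
import numpy as np

k_B = 8.617e-5                                   # Boltzmann constant, eV/K
T_test, T_use = 120.0 + 273.15, 40.0 + 273.15    # Kelvin

rng = np.random.default_rng(0)
# Hypothetical engineering prior: E_a around 0.7 eV with modest uncertainty.
Ea_draws = rng.normal(0.7, 0.05, 10_000)

# Arrhenius acceleration factor from use to test conditions.
AF = np.exp(Ea_draws / k_B * (1.0 / T_use - 1.0 / T_test))

lo, med, hi = np.percentile(AF, [2.5, 50, 97.5])
print(f"acceleration factor: median {med:.0f}, 95% interval ({lo:.0f}, {hi:.0f})")
```

A tighter prior on the activation energy shrinks this interval, and that is precisely the extra information a Bayesian accelerated life test analysis can exploit.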

You and Chris Gotwalt will speak in greater detail about this topic on Sept. 13 in Stockholm and also introduce some case studies. What can people expect to take away from the seminar? 

In addition to the Bayesian methods now available in the Life Distribution and Fit Life by X platforms, JMP Pro is able to, with just a couple of additional clicks of a mouse, perform bootstrap simulations and fully-parametric simulations of almost any analysis that one can do in JMP (not just the reliability analysis platforms). These powerful simulation-based statistical methods have been developed over the past 40 years and have wide applicability, but have not seen as much use as one might have expected. This is because of the substantial amount of extra programming effort that was previously needed to implement such methods. JMP has solved that problem. Chris and I have developed and will present a series of case studies to illustrate both the power and the simplicity of the JMP capabilities for Bayesian analysis, bootstrap simulation, and fully-parametric simulation.
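For readers who want to see the idea behind a fully parametric, simulation-based interval, here is a minimal sketch (my own, not the JMP Pro implementation): estimate the Weibull B10 life (time by which 10% of units fail) by maximum likelihood, simulate repeatedly from the fitted model, refit each simulated sample, and take percentiles of the simulated estimates as an interval. The data are simulated and uncensored to keep the example short.

```python
# A minimal sketch (simulated data, not the JMP Pro implementation): a fully
# parametric bootstrap confidence interval for the Weibull B10 life.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = stats.weibull_min.rvs(2.0, scale=1000.0, size=30, random_state=rng)

# Maximum likelihood fit to the observed data.
shape_hat, _, scale_hat = stats.weibull_min.fit(data, floc=0)
b10_hat = stats.weibull_min.ppf(0.10, shape_hat, scale=scale_hat)

# Parametric bootstrap: simulate from the fitted model, refit, recompute B10.
boot_b10 = []
for _ in range(2000):
    sim = stats.weibull_min.rvs(shape_hat, scale=scale_hat, size=data.size, random_state=rng)
    s, _, sc = stats.weibull_min.fit(sim, floc=0)
    boot_b10.append(stats.weibull_min.ppf(0.10, s, scale=sc))

lo, hi = np.percentile(boot_b10, [2.5, 97.5])
print(f"B10 estimate ~ {b10_hat:.0f} hours, 95% bootstrap interval ({lo:.0f}, {hi:.0f})")
```

The point of the case studies is that this refit-and-repeat machinery, which once required custom programming, is now a few clicks in JMP Pro for almost any analysis platform.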

 

We hope you will sign up for the in-person event in Stockholm, Sweden, or watch the live stream online.
