I was recently at ALT 2012, a conference on reliability, hosted by INSA Rennes in France, where I had studied physics. I greeted the conference organizer, who took one look at me and seemed puzzled. He had taught me a while back and remembered me only as a student; he struggled to see me in any other way.
That theme appeared again in discussions about degradation analysis. One of the presentations during the conference was given by Dr. William Q. Meeker on accelerated repeated measures degradation tests. Another was delivered by Dr. Chris Gotwalt, who is director of JMP Statistical Research and Development. Chris’ team develops the platforms under the Analyze menu in JMP. Chris himself has made numerous contributions to the product, having developed platforms like the Neural platform as well as the underlying numerical algorithms for fitting many statistical models and constructing optimal experimental designs.
Chris’ paper showed how the degradation of a solar panel can be accurately modeled using a combination of nonlinear models. Telling a compelling story, Chris shared his thought process and explained very nicely how combining exponential decay with a sine wave in the square root of time resulted in an accurate prediction.
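To make the idea concrete, here is a minimal sketch of fitting a model of that general shape to simulated data. The exact functional form, parameter names, and starting values are my assumptions for illustration, not Chris’ actual model; the point is only how an exponential decay plus a sine wave, both driven by the square root of time, can be fit as one nonlinear model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model form (an assumption, not the model from the talk):
# exponential decay plus a sine wave, both in sqrt(time).
def degradation(t, a, b, c, omega, phi):
    s = np.sqrt(t)
    return a * np.exp(-b * s) + c * np.sin(omega * s + phi)

# Simulate one year of daily measurements from known "true" parameters.
rng = np.random.default_rng(0)
t = np.arange(1, 366)  # days 1..365
true_params = (1.0, 0.1, 0.05, 0.5, 0.3)
y = degradation(t, *true_params) + rng.normal(0.0, 0.005, t.size)

# Fit by nonlinear least squares; sine terms are sensitive to the
# starting values, so p0 is chosen near plausible values.
popt, pcov = curve_fit(degradation, t, y, p0=(0.9, 0.09, 0.04, 0.48, 0.2))
```

With only a year of (simulated) observations, the fitted curve tracks the data closely, which is the kind of early forecasting the talk described.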
The model predicts accurately after only a year of observation, using data from early in the life of the solar array that are normally discarded in degradation analyses. Accurate, early forecasting models like the one Chris presented can be used by solar technology manufacturers to predict warranty returns, and their customers can plan in advance for the replacement of degrading cells.
However, some physicists I talked to took issue with this because the model is not based on any particular physical theory, but rather is an empirical model aimed at describing a phenomenon. I think these different approaches need not be opposed to one another but rather should be seen as different ways of looking at the same thing. They too seemed puzzled!