FR60
Level IV

Model prediction

Hi

I'm working on a database with about 3000 records and about 100 predictors. The response is an electrical parameter with a good Gaussian distribution. The database is a multivariate time series, since both the response and the predictors are time dependent. The predictors are collected along the production line at a particular process step, while the electrical parameter is measured about two weeks later. The idea is to predict the electrical measurement from the in-line machine data (the predictors).

As a first model I used a random forest. The R² and error were really good for both the training and test groups. However, when I deployed the model in production on data never seen before, its performance was very bad. I was able to explain this behavior by the weakness of random forests at extrapolating. Excluding tree-based models, the remaining choices are the polynomial fit and the neural network. The first was not acceptable in terms of R² and error, while a NN with 2 layers and 3 activation functions returned a good R² and error on both training and test data. Once I had built the model, I deployed it in production on never-seen data (asterisks in Graph Builder). Even though the predicted trend is correct, the individual values are not: on never-seen data the NN shows very high variability. So at the end of this story I don't have any other model left in JMP to test, at least as far as I can tell. Can someone give me a trick or suggestion to solve this issue?
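For intuition on the extrapolation point, here is a minimal sketch (in Python with scikit-learn rather than JMP, on made-up data) of why tree ensembles fail outside the training range: a forest predicts by averaging training-leaf values, so its predictions flatten at the edge of the data, while a linear fit keeps following the trend.

```python
# Minimal sketch, Python/scikit-learn (not JMP), synthetic data:
# tree ensembles cannot extrapolate beyond the range they were trained on.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 10, 500).reshape(-1, 1)          # training covers x in [0, 10]
y_train = 2.0 * x_train.ravel() + rng.normal(0, 1, 500)   # true trend: y = 2x + noise

x_new = np.array([[12.0], [15.0], [20.0]])                # "never seen" region

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(x_train, y_train)
lm = LinearRegression().fit(x_train, y_train)

print("random forest:", rf.predict(x_new))   # saturates near max(y_train), ~20
print("linear model: ", lm.predict(x_new))   # keeps the trend: ~24, ~30, ~40
```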

All plots are in the attached PPT.

 

Thanks, Felice

7 REPLIES
P_Bartell
Level VIII

Re: Model prediction

Here are a couple of thoughts for you:

 

1. What do you know about your measurement system variation? If the answer is close to 'nothing', I'd detour your causal/predictive modeling study to include a measurement system variability study (see the sketch after this list). The two-week lag between production data collection and measurement of the response raises the possibility of a drifting or highly variable measurement system between training/validation and future predictions.

 

2. Do you have JMP Pro? Time series functional data exploration and analysis is one of the signature capabilities of JMP Pro, and your study is tailor-made for the Functional Data Explorer. For more details I suggest watching this on-demand webinar: Functional Data Explorer
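On point 1, here is a minimal sketch of what a measurement system variability study quantifies (plain Python with hypothetical numbers, not a JMP gauge study): measure a few reference units repeatedly and compare the repeatability variance to the total spread. If the measurement variance is a large share of the total, no model can predict much better than that.

```python
# Minimal sketch, Python, hypothetical data: a crude measurement-system check.
# 5 reference units, each measured 10 times; repeatability = within-unit variance.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "unit": np.repeat(range(5), 10),
    "value": np.concatenate([
        np.random.default_rng(u).normal(loc=100 + 2 * u, scale=0.5, size=10)
        for u in range(5)
    ]),
})

within = df.groupby("unit")["value"].var(ddof=1).mean()   # repeatability variance
total = df["value"].var(ddof=1)                           # total observed variance
print(f"repeatability sigma: {within ** 0.5:.3f}")
print(f"% of total variance from measurement: {100 * within / total:.1f}%")
```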

statman
Super User

Re: Model prediction

Unfortunately, developing models that predict well is not a trivial exercise. R² is an enumerative statistic: it applies only to the data in hand. The RMSE of the model is likewise an enumerative statistic and applies only to the data set in hand. Extrapolation of the model is not an enumerative problem (see Deming: enumerative vs. analytic studies), but an exercise in acquiring data that represent future conditions. Apparently the data sets you are using to model the existing data do not represent the future conditions, and no statistical method can help you there. There may be issues with your measurement system, as @P_Bartell suggests, or many other reasons why your data set does not represent future conditions. Is the process stable (per Shewhart)? Perhaps you should do directed sampling to understand the process before trying to model it. I doubt all 100 predictor variables have the same influence on the pertinent response variables (see the principle of sparsity of effects).
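To make the Shewhart stability question concrete, a minimal individuals-chart check is sketched below (Python with synthetic data, not JMP's Control Chart Builder): estimate sigma from the average moving range and flag points outside the 3-sigma limits.

```python
# Minimal sketch, Python, synthetic series: Shewhart individuals-chart check.
# Points outside the 3-sigma limits suggest the process is not stable.
import numpy as np

y = np.random.default_rng(1).normal(50, 2, 300)   # stand-in for the response over time
mr = np.abs(np.diff(y))                           # moving ranges of consecutive points
sigma_hat = mr.mean() / 1.128                     # d2 constant for subgroup size 2
center = y.mean()
ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

out = np.flatnonzero((y > ucl) | (y < lcl))       # indices of out-of-control points
print(f"limits: [{lcl:.2f}, {ucl:.2f}], out-of-control points: {out}")
```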

"All models are wrong, some are useful" G.E.P. Box

Re: Model prediction

A few more points to consider: there are multiple options for the models you have already tried that may improve the fit. For the random forest: how many trees did you use? Did you try more? How large was each tree? Did you try growing them bigger? What makes you think the neural network with 3 nodes was enough? Did you try multiple tours? Did you try a boosted neural network? If so, did you try changing the learning rate? If you are seeing a really big difference between validation and test set performance, did something change that would make the test set that different? It may also call into question how you split your data between training, validation, and test.
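As one illustration of that kind of tuning loop (sketched in Python/scikit-learn rather than JMP's Bootstrap Forest platform, with synthetic data standing in for the real table): vary the number of trees and the tree depth, and watch the validation error.

```python
# Minimal sketch, Python/scikit-learn (not JMP), synthetic data:
# sweep forest size and tree depth and compare validation RMSE.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
X = rng.normal(size=(3000, 100))                  # stand-in for the 100 predictors
y = X[:, 0] * 3 + X[:, 1] ** 2 + rng.normal(0, 1, 3000)

# time-ordered split (see the later replies on why random splits are risky here)
X_tr, y_tr = X[:2000], y[:2000]
X_va, y_va = X[2000:], y[2000:]

for n_trees in (50, 200, 500):
    for depth in (5, 10, None):                   # None = grow trees fully
        rf = RandomForestRegressor(n_estimators=n_trees, max_depth=depth,
                                   random_state=0).fit(X_tr, y_tr)
        rmse = mean_squared_error(y_va, rf.predict(X_va)) ** 0.5
        print(f"trees={n_trees:3d} depth={depth} validation RMSE={rmse:.3f}")
```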

 

But if you are comfortable with how the training, validation, and test sets were created and believe them all to be representative, there are ways to improve the models. Given enough nodes and search iterations, a neural network can provide a perfect fit, because it is a universal approximator. But that is not desirable, as it will fit only that data, not future data (see @statman's comments). So remember that expecting a "perfect" prediction is not realistic. Both @P_Bartell and @statman point out things to consider. You really should not expect to do better than your measurement system error; that gives you a realistic target for your prediction error.

Dan Obermiller
P_Bartell
Level VIII

Re: Model prediction

I concur with everything @Dan_Obermiller and @statman recommend or state.

 

One other thought for you: since you seem to be working with happenstance historical production process data, do you have the capability to run designed experiments? The reason I'm going down this road is that if your models based on historical information aren't particularly extensible to future observations, then quite possibly other important causal variables are entering the system that the historical data does not capture from an effect-modeling point of view, but you may be able to include some of these in a DOE-centric investigation.

 

You are on a hunt for some needles in a haystack, and I know of no more efficient method of finding them than DOE. For example, maybe there is raw material variation in production that you aren't accounting for with the historical process data; in a DOE-type investigation you have several options for handling this sort of thing, such as blocking. Or, if effect sparsity is present as @statman suggests, maybe a Definitive Screening Design could work?

 

The idea here is to leverage your past modeling work and what you've learned, together with your process knowledge, and use the power of DOE to help you find that 'useful model'.

 

During my days in industry, one wise engineer I worked with had a saying, "Until we can turn a failure mode on and off at will, we don't understand the process." It was DOE that was the single most efficient method that got us to "...understand the process."

dale_lehman
Level VII

Re: Model prediction

I defer to the other comments here, as those people know more than I do about such things. But there is one issue they haven't mentioned which I thought I'd raise. Since this is multivariate time series data, what sense is there in using random validation and test data sets? Doesn't this ignore the time structure in the data? Of course you can use neural nets, random forests, etc., but I wouldn't expect the performance to be very good on new data, since those models have ignored the time structure. For time series data, I would think the test data should be a contiguous range of observations, so the time structure can be modeled and tested. So, is it appropriate to use these methods, with random validation and test sets, on time series data such as yours?

Re: Model prediction

As you state, you should not randomly assign observations to training, validation, and test sets. I did not see anybody say that a random assignment was made, but if it was, it should not be done. Make the assignment according to time: the training set is the oldest data, validation is the next block, and test is the most recent block. Splitting this way provides a good indication of forecasting ability.
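A minimal sketch of that time-ordered assignment (Python/pandas with hypothetical column names; in JMP this would be a validation column built from the sorted row order): oldest 60% train, next 20% validate, most recent 20% test.

```python
# Minimal sketch, Python/pandas, hypothetical columns: time-ordered
# train/validation/test roles instead of random assignment.
import numpy as np
import pandas as pd

df = pd.DataFrame({"timestamp": pd.date_range("2023-01-01", periods=3000, freq="h"),
                   "y": np.random.default_rng(3).normal(size=3000)})
df = df.sort_values("timestamp").reset_index(drop=True)

n = len(df)
df["role"] = "training"                               # oldest 60%
df.loc[int(0.6 * n):int(0.8 * n) - 1, "role"] = "validation"  # next 20%
df.loc[int(0.8 * n):, "role"] = "test"                # most recent 20%
print(df["role"].value_counts())
```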

 

The training, validation, and test sets need to be representative of the stable process. If you fit the training and validation sets well, but not the test set, that naturally leads one to believe that something happened to alter the test set. Whatever occurred or changed should be included as another predictor in the model.

Dan Obermiller
FR60
Level IV

Re: Model prediction

Thanks to everyone who replied to my post. All are valid observations. I ran some simple experiments to better understand the issue.

My conclusions and results are in the attached PPT. Any comments and suggestions are welcome.

 

Felice