hluo90
Level I

Creating a simulation model and experimental model and comparing

So I have some experimental data and some simulation data, and I would like to compare the sensitivities of the factors between the two to see how they line up.

This is similar to: How do I make a prediction model based on my old experiment data? 

 

I'm new to JMP and so hopefully I'm doing this right. 

Attached are the simulation data (2) and the experimental data (3).

 

1. Is using the Fit Model command on the two data sets and comparing the parameter estimates the correct approach?

2. If so, when running both sets of data, I get the same values for the std errors for X1, X2, and X3. I was under the impression that the standard errors relate to the fit quality of the parameter estimate, but I don't see how they could be the same for all three. Is there something I'm missing here?
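[Editor's note on question 2: a minimal sketch (Python/statsmodels, synthetic data rather than the attached tables) showing why a balanced, orthogonal design produces identical standard errors for every coded factor. The SE depends on the design matrix and the overall residual variance, not on how well each factor individually fits.]

```python
# A minimal sketch, not the poster's data: in a balanced, orthogonal design,
# every coded main-effect column has the same sum of squares, so every
# coefficient gets the same standard error, SE(b_j) = s / sqrt(sum(x_j^2)).
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Full 2^3 factorial in coded units (-1/+1), replicated 10 times -> 80 runs.
design = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
X = np.tile(design, (10, 1))

# Arbitrary "true" model plus noise, just for illustration.
y = (1.0 + 0.5 * X[:, 0] + 0.2 * X[:, 1] - 0.3 * X[:, 2]
     + rng.normal(scale=0.1, size=len(X)))

res = sm.OLS(y, sm.add_constant(X)).fit()
print(res.bse)  # identical SEs for x1, x2, x3 -- a property of the design,
                # not of each factor's individual fit quality
```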

 

Thanks


7 REPLIES
P_Bartell
Level VIII

Re: Creating a simulation model and experimental model and comparing

Disclaimer: I have not looked at the data, models, or any analysis that one would create for your problem.

 

I'll focus most of my commentary on your first question...with many questions and some thoughts.

 

1. Can you share more about the practical problem at hand? I believe any analysis and conclusions need to be considered and filtered through that lens vs. just looking at plots, data, and statistics.

2. Blind numeric comparison of parameter estimates is one, and only one, way to compare these two systems. I can all but guarantee they will differ; rerun the empirical experiment and I'll all but guarantee a numeric difference. But are these differences important from a practical point of view? See question #1 above.

3. How much overlap within the factor space exists between the simulation data and the empirical data? Are there areas of interest that are not consistent?

4. How much overlap is there within the response space from each? Are there areas of interest that are not consistent?

5. What is the degree of consistency across the residual space for each model? After all...this is an estimate of unexplained variation for each model...is the degree of inconsistency problematic?

6. Have you tried simulation for each model to see what the sensitivities are over the modeling space?

 

I invite others to comment and add their thoughts. But at the end of the day I think just looking at parameter estimates is a very, very narrow view of 'what's going on in the system?'

hluo90
Level I

Re: Creating a simulation model and experimental model and comparing

1. Sure, so essentially I have a simulation of circuit performance varying some parameters. I also have measured results of the same circuit with those same parameters varied. I would like to show that the measured results either matched or did not match the simulation. 

 

2. The parameter estimate in this case would be the circuit parameter that was varied, and so I think it would be important from a practical point of view.

 

3. For now it's the same, but the simulation model should probably have more points.

 

4-6: Not sure I understand.

 

Ultimately, I'm just trying to show that simulation accurately predicts the fabricated devices. 

P_Bartell
Level VIII

Re: Creating a simulation model and experimental model and comparing

@hluo90 With my question #4, what I'm asking is how the distributions of the response compare between the simulation and empirical experiments. I'd probably start with a histogram of each and compare the spread, shape, and central tendency of each distribution. Then examine the degree to which the distributions match on these characteristics.
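[Editor's note: a minimal sketch of that comparison in Python. `y_sim` and `y_exp` are placeholder arrays standing in for the simulated and measured response columns; substitute the actual data.]

```python
# Compare spread, shape, and central tendency of the two response distributions.
import numpy as np
import matplotlib.pyplot as plt

y_sim = np.random.default_rng(1).normal(1.00, 0.05, 810)  # placeholder data
y_exp = np.random.default_rng(2).normal(1.02, 0.08, 810)  # placeholder data

for name, y in [("simulation", y_sim), ("experiment", y_exp)]:
    print(f"{name}: mean={y.mean():.3f}  sd={y.std(ddof=1):.3f}  "
          f"range={y.max() - y.min():.3f}")

plt.hist(y_sim, bins=30, alpha=0.5, label="simulation")
plt.hist(y_exp, bins=30, alpha=0.5, label="experiment")
plt.xlabel("response")
plt.legend()
plt.show()
```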

 

With my question #5, each experiment produces a model. Each model can be used to make predictions. The arithmetic differences between the observed and predicted values are the residuals. How do the residual distributions compare?
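[Editor's note: a small helper along those lines. `fit` is assumed to be a statsmodels OLS result, one per data set; the names in the usage comment are hypothetical.]

```python
# Summarize residual spread for a fitted OLS model.
import numpy as np

def residual_summary(name, fit):
    """Print the spread of observed-minus-predicted values for one fit."""
    r = np.asarray(fit.resid)
    print(f"{name}: RMSE={np.sqrt(np.mean(r ** 2)):.4f}  "
          f"max|resid|={np.abs(r).max():.4f}")

# Usage, assuming fits from the two data sets:
# residual_summary("simulation", fit_sim)
# residual_summary("experiment", fit_exp)
```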

 

With my question #6, using something like JMP's Monte Carlo simulation capability, you can vary the predictor variables in each model and see how the responses change based on that variation.
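[Editor's note: a rough Python stand-in for that simulation step. It assumes each fitted model is intercept plus X1-X3 main effects (statsmodels fits with hypothetical names), and uses the 0.85-1.15 factor range shown in the effect summary later in the thread.]

```python
# Monte Carlo sketch: sample factor settings, push them through a fitted
# model, and look at the induced response variation.
import numpy as np

def mc_response(fit, n=10_000, low=0.85, high=1.15, seed=0):
    """Sample X1..X3 uniformly over the studied range and predict Y."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(low, high, size=(n, 3))
    Xc = np.column_stack([np.ones(n), X])  # prepend intercept column
    return Xc @ fit.params                 # predicted responses

# Usage, assuming fits from the two data sets:
# print("sim spread:", mc_response(fit_sim).std(ddof=1))
# print("exp spread:", mc_response(fit_exp).std(ddof=1))
```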

statman
Super User

Re: Creating a simulation model and experimental model and comparing

I don't understand your question.  What do you mean by "sensitivities of factors"?  Are you trying to compare the 2 models from each data set?  If so, as @P_Bartell indicates, there are many characteristics of the models that could be compared: R-square, RMSE, coefficients (estimates), and residuals (in fact, you have unusual residuals in your DOE data). After briefly looking at the 2 data sets, I'm not sure what analysis you are doing, but I do not get the same std errors for the parameter estimates, nor do I get the same statistically significant parameters in the models.

How was the simulation created?  What model was used for the simulation?

"All models are wrong, some are useful" G.E.P. Box
hluo90
Level I

Re: Creating a simulation model and experimental model and comparing

I mean the standard errors of the parameter estimates. After understanding a bit more about how they are calculated (the total sum of the error divided by the variation in the parameter), it seems that the standard errors of the parameters should be the same.
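[Editor's note: for reference, the general least-squares formula (standard regression theory, not anything JMP-specific):

$$
\widehat{\mathrm{SE}}(\hat{\beta}_j) \;=\; \sqrt{\hat{\sigma}^2 \,\big[(X^\top X)^{-1}\big]_{jj}},
\qquad
\hat{\sigma}^2 = \frac{\mathrm{RSS}}{n - p}.
$$

In a balanced coded design, $X^\top X$ is diagonal with the same diagonal entry for every factor column, so all the factor SEs coincide even though the estimates themselves differ.]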

 

Back to the goal: yes, I am trying to compare the 2 models from each data set. The data set is a variation of parameters in a circuit. What is the standard / best way to compare the two models and to show that varying the parameters in a circuit in both sim and hardware gives the same or similar results?

statman
Super User

Re: Creating a simulation model and experimental model and comparing

Sorry, but it is very difficult for me to answer your question when I don't understand the situation, the experiment, or the analysis you did.  How much change in Y is of scientific or engineering interest?  The total range in Y over the 810 rows of data is 0.44.  Is that of practical significance?  You have multiple data points for each treatment combination.  Are these repeats or randomized replicates?  If I analyze the actual experiment treating them as randomized replicates, I find at least 6 data points that are unusual (via residuals analysis). The R-square is about 0.49, which means the model used for the analysis only explains about 49% of the variation in the data.  Only the main effects and possibly the quadratic effect are of statistical interest.  Have you reduced the model?  Does the model make sense?

[Image: statman_0-1609775004079.jpeg]

[Image: statman_1-1609775025265.jpeg]

Source           LogWorth   PValue
X1(0.85,1.15)    101.912    0.00000
X3(0.85,1.15)     20.133    0.00000
X2(0.85,1.15)     13.329    0.00000
X2*X2              2.563    0.00274
X1*X1              1.030    0.09336
X1*X3              0.970    0.10727
X3*X3              0.925    0.11896
X1*X2              0.607    0.24733
X2*X3              0.294    0.50874
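[Editor's note: one way to carry out the model reduction described above, sketched with Python/statsmodels rather than JMP. The file name is a placeholder; the column names follow the thread. The loop drops the least significant non-main-effect term, refits, and repeats.]

```python
# Backward elimination sketch: keep main effects, prune interaction and
# quadratic terms whose p-values exceed 0.05, worst first.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("experiment.csv")  # placeholder path for the DOE data
X = pd.DataFrame({
    "X1": df["X1"], "X2": df["X2"], "X3": df["X3"],
    "X1*X2": df["X1"] * df["X2"],
    "X1*X3": df["X1"] * df["X3"],
    "X2*X3": df["X2"] * df["X3"],
    "X1*X1": df["X1"] ** 2,
    "X2*X2": df["X2"] ** 2,
    "X3*X3": df["X3"] ** 2,
})
main_effects = ["X1", "X2", "X3"]  # always retained

while True:
    fit = sm.OLS(df["Y"], sm.add_constant(X)).fit()
    candidates = fit.pvalues.drop(["const"] + main_effects)
    if candidates.empty or candidates.max() < 0.05:
        break
    X = X.drop(columns=candidates.idxmax())  # drop least significant term

print(fit.summary())
```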

"All models are wrong, some are useful" G.E.P. Box
hluo90
Level I

Re: Creating a simulation model and experimental model and comparing

I would expect Y not to change too much, maybe 0.1-ish. The 6 points you highlighted are likely test issues.

The repeats are measurements of different devices with the same design parameters.

 

How do you analyze it as randomized replicates?

 

By reducing the model, do you mean removing the extra terms? 

 

To be honest, when I was looking at this originally, it didn't make sense to me that the std errors were all the same and all so low. After reading more about statistics, it made more sense how the errors were calculated. I suppose I was hoping to show that the parameter estimates were within one standard deviation of the measured values, since the measurements were very noisy. When I compared the parameter estimates, I thought the experimental and simulation results more or less tracked, so I was pretty happy with the result.
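[Editor's note: a sketch of that last comparison. `fit_sim` and `fit_exp` are hypothetical names for statsmodels fits of the two data sets on the same model terms. Since the two estimates come from independent data, their difference has standard error sqrt(SE_sim^2 + SE_exp^2), which makes the "within one standard deviation" check concrete.]

```python
# Compare matching coefficients from two independent fits, scaled by their
# combined standard error (a two-sample z-style check).
import numpy as np

def compare_coefficients(fit_sim, fit_exp):
    """Per-term difference between two fits with identically named terms."""
    for term in fit_sim.params.index:
        d = fit_sim.params[term] - fit_exp.params[term]
        se = np.sqrt(fit_sim.bse[term] ** 2 + fit_exp.bse[term] ** 2)
        print(f"{term}: diff={d:+.4f}  combined SE={se:.4f}  "
              f"|diff|/SE={abs(d) / se:.2f}")

# compare_coefficients(fit_sim, fit_exp)
# Terms with |diff|/SE well above 2 disagree beyond their uncertainty.
```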